
Basic Concept of Software Engineering

Computer software

Definition of software
Computer software is a general name for all forms of programs. A program itself is
a sequence of instructions which the computer follows to perform a given task.

Types of software
Software can be categorised into three major types, namely system software,
programming software and application software.

System software
System software helps to run the computer hardware and the entire computer system.
It includes the following:

• device drivers
• operating systems
• servers
• utilities
• windowing systems

The function of system software is to shield the applications programmer from the details
of the particular computer installation being used, including such peripheral devices as
communications equipment, printers, readers, displays and keyboards, and also to partition the
computer's resources, such as memory and processor time, in a safe and stable manner.

Programming software
Programming software offers tools that assist a programmer in writing programs and other
software in different programming languages in a more convenient way. The tools
include:

• compilers
• debuggers
• interpreters
• linkers
• text editors

Application software
Application software is a class of software which the user of a computer needs to accomplish
one or more specific tasks. Common applications include the following:

• industrial automation
• business software
• computer games
• quantum chemistry and solid state physics software
• telecommunications (i.e., the internet and everything that flows on it)
• databases
• educational software
• medical software
• military software
• molecular modeling software
• photo-editing
• spreadsheets
• word processing
• decision-making software

What is Software Engineering?

Definition of Software Engineering.

Software engineering is the application of a systematic, disciplined, quantifiable
approach to the development, operation, and maintenance of software, and the study of
these approaches. In other words, it is the application of engineering to software.

Sub-disciplines of Software engineering

Software engineering can be divided into ten sub-disciplines. They are as follows:

• Software requirements: The elicitation, analysis, specification, and validation of
requirements for software.
• Software design: Software design consists of the steps a programmer should take
before starting to code the program in a specific language. It is usually done with
Computer-Aided Software Engineering (CASE) tools and uses standards for the
format, such as the Unified Modeling Language (UML).
• Software development: The construction of software through the use of
programming languages.
• Software testing: An empirical investigation conducted to provide stakeholders
with information about the quality of the product or service under test (a minimal
illustrative test appears after this list).
• Software maintenance: This deals with enhancements of software systems to solve
the problems they may have after being used for a long time after they are first
completed.
• Software configuration management: is the task of tracking and controlling
changes in the software. Configuration management practices include revision
control and the establishment of baselines.
• Software engineering management: The management of software systems borrows
heavily from project management.
• Software development process: A software development process is a structure
imposed on the development of a software product. There are several models for
such processes, each describing approaches to a variety of tasks or activities that take
place during the process.
• Software engineering tools (CASE, which stands for Computer-Aided Software
Engineering): CASE tools are a class of software that automates many of the
activities involved in various life cycle phases.
• Software quality: The totality of functionality and features of a software product that
bear on its ability to satisfy stated or implied needs.
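As a small, illustrative sketch of the testing sub-discipline only (the discount_price function below is hypothetical and not taken from any particular system; Python's standard unittest module is assumed), an automated unit test gives stakeholders repeatable evidence about one aspect of product quality:

    import unittest

    def discount_price(price: float, percent: float) -> float:
        """Hypothetical function under test: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountPriceTest(unittest.TestCase):
        def test_typical_discount(self):
            # 10% off 200.00 should give 180.00
            self.assertEqual(discount_price(200.00, 10), 180.00)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(discount_price(99.99, 0), 99.99)

        def test_invalid_percentage_is_rejected(self):
            with self.assertRaises(ValueError):
                discount_price(50.0, 150)

    if __name__ == "__main__":
        unittest.main()

Running the file reports which expectations hold and which fail, which is exactly the kind of information about quality that testing is meant to provide.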

Software Engineering Goals and Principles

Goals
Requirements, as initially specified for systems, are usually incomplete.
Apart from accomplishing these stated requirements, a good software system must be
able to easily support changes to these requirements over the system's life. Therefore, a
major goal of software engineering is to be able to deal with the effects of these changes.
The software engineering goals include:

• Maintainability: It should be possible to make changes to the software without
increasing the complexity of the original system design.

• Reliability: The software should be able to prevent failure in design and
construction as well as recover from failure in operation. In other words, the software
should perform its intended function with the required precision at all times.

• Efficiency: The software system should use the resources that are available in an
optimal manner.

• Understandability: The software should accurately model the view the reader has
of the real world. Since code in a large, long-lived software system is usually read
more times than it is written, it should be easy to read at the expense of being easy to
write, and not the other way around.

Principles
Sounds engineering principles must be applied throughout development, from the design
phase to final fielding of the system in order to attain a software system that satisfies the
above goals. These include:

• Abstraction: The purpose of abstraction is to bring out essential properties while
omitting inessential detail. The software should be organized as a ladder of
abstraction in which each level of abstraction is built from lower levels. The code
is sufficiently conceptual so the user need not have a great deal of technical
background in the subject. The reader should be able to easily follow the logical
path of each of the various modules. The decomposition of the code should be
clear.

• Information Hiding: The code should include no needless detail. Elements that do
not affect other segments of the system are inaccessible to the user, so that only the
intended operations can be performed. There are no "undocumented features".

• Modularity: The code is purposefully structured. Components of a given module
are logically or functionally related (a small code sketch illustrating these principles
follows this list).

• Localization: The breakdown and decomposition of the code is rational. Logically
related computational units are collected together in modules.

• Uniformity: The notation and use of comments, specific keywords and formatting
is consistent and free from unnecessary differences from other parts of the code.

• Completeness: Nothing is deliberately missing from any module. All important or
relevant components are present both in the modules and in the overall system as
appropriate.

• Confirmability: The modules of the program can be tested individually with
adequate rigor. This gives rise to a more readily alterable system, and enables the
reusability of tested components.
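As an illustration only (the TaskQueue class below is hypothetical and not drawn from the text), the following Python sketch shows how abstraction, information hiding and modularity can appear in code: callers use a small, documented interface, while the internal storage stays hidden and can change without affecting them.

    class TaskQueue:
        """Abstraction of 'a collection of pending tasks' (hypothetical example)."""

        def __init__(self):
            self._items = []  # hidden implementation detail (information hiding)

        def enqueue(self, task: str) -> None:
            """Add a task to the back of the queue."""
            self._items.append(task)

        def next_task(self) -> str:
            """Remove and return the oldest task; raise IndexError if empty."""
            if not self._items:
                raise IndexError("no pending tasks")
            return self._items.pop(0)

        def __len__(self) -> int:
            return len(self._items)


    # Usage: client code depends only on the public interface (modularity),
    # never on the list stored inside the class.
    queue = TaskQueue()
    queue.enqueue("write requirements document")
    queue.enqueue("review design")
    print(queue.next_task())  # -> write requirements document
    print(len(queue))         # -> 1

Because the internal list is hidden, it could later be replaced by, say, collections.deque without any change to the code that uses the class, which is the kind of change-tolerance the goals above call for.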

History of Software Engineering

Overview of Software Engineering.

Software engineering originated from the NATO Software Engineering Conference in
1968. It came at the time of the software crisis. The field of software engineering has
since then been growing gradually as a discipline dedicated to creating quality software. In
spite of being around for a long time, it is a relatively young field compared to other fields
of engineering, and some people still question whether software engineering is
actually engineering, because software is largely invisible. Although it is disputed
what impact it has had on actual software development over the last more than 40 years, the
field's future looks bright according to Money Magazine and Salary.com, who rated
"software engineering" as the best job in America in 2006.

The early computers had their software wired with the hardware, making them
inflexible because the software could not easily be moved from one machine to another.
This problem necessitated the development of software that was independent of the
hardware. Programming languages started to appear in the 1950s, and this was another
major step in abstraction. Major languages such as FORTRAN, ALGOL, and COBOL
were released in the late 1950s to deal with scientific, algorithmic, and business problems
respectively. E.W. Dijkstra wrote his seminal paper, "Go To Statement Considered
Harmful", in 1968, and David Parnas introduced the key concepts of modularity and
information hiding in 1972 to help programmers deal with the ever increasing complexity
of software systems. A software system for managing the hardware, called an operating
system, was also introduced, most notably Unix in 1969. In 1967, the Simula language
introduced the object-oriented programming paradigm.

Technological advancement in software has always been driven by the ever-changing
manufacture of various types of computer hardware. As new technologies emerged, from
the vacuum tube to the transistor and to the microprocessor, it became ever more necessary
to upgrade and even write new software. In the mid 1980s software experts reached a
consensus on the centralised construction of software using a software development
life cycle starting from system analysis. This period gave birth to object-oriented
programming languages. Open-source software started to appear in the early 90s in the form
of Linux and other software, introducing the "bazaar" or decentralized style of constructing
software.[10] Then the Internet and World Wide Web hit in the mid 90s, changing the
engineering of software once again. Distributed systems gained sway as a way to design
systems, and the Java programming language was introduced as another step in abstraction,
having its own virtual machine. Programmers collaborated and wrote the Agile Manifesto,
which favored more lightweight processes to create cheaper and more timely software.

Evolution of Software Engineering

There are a number of areas where the evolution of software engineering is notable:

• Professionalism: The early 1980s witnessed software engineering becoming a full-
fledged profession like computer science and other engineering fields.

• Impact of women: In the early days of computer development (the 1940s, 1950s, and
1960s), men were found in the hardware sector because of the physical demands
of wiring heavy-duty equipment, which was considered too strenuous for women. The
writing of software was delegated to the women. Some of the women who held
programming jobs at this time include Grace Hopper and Jamie Fenton.
Today, many fewer women work in software engineering than in other professions;
the reason for this is yet to be ascertained.
• Processes: Processes have become a great part of software engineering and are
praised for their ability to improve software, but sharply condemned for their
potential to constrain programmers.
• Cost of hardware: The relative cost of software versus hardware has changed
substantially over the last 50 years. When mainframes were costly and needed large
support staffs, the few organizations purchasing them also had enough to fund big,
high-priced custom software engineering projects. Computers can now be said to be
much more available and much more powerful, which has a lot of effects on
software. The larger market can sustain large projects to create commercial
packages, as is the practice of companies such as Microsoft. The inexpensive
machines permit each programmer to have a terminal capable of fairly rapid
compilation. The programs under consideration can use techniques such as garbage
collection, which make them easier and faster for the programmer to write.
Conversely, far fewer organizations are interested in employing programmers
for large custom software projects, instead using commercial packages as much as
possible.

The Pioneering Era

The key development was that new computers were emerging almost every year or
two, making existing ones obsolete. Programmers had to rewrite all their programs to run
on these new computers. They did not have computers on their desks and had to go to the
"computer room" or "computer laboratory". Jobs were run by booking machine time
or by operational staff. Jobs were run by inserting punched cards for input into the
computer's card reader and waiting for results to come back on the printer.

The field was so new that the idea of managing by schedule was absent. Guessing the
completion time of a project was almost unfeasible. Computer hardware was
application-specific; scientific and business tasks needed different machines. High level
languages like FORTRAN, COBOL, and ALGOL were developed to take care of the need
to frequently translate old software to meet the needs of new machines. Systems software
was given out for free by the vendors, since it had to be installed in the computer before it
was sold. Custom software was sold by a few companies, but there was no sale of packaged
software.

Organisations such as IBM's scientific user group SHARE gave out software for free, and as
a result reuse was the order of the day. Academia did not yet teach the principles of computer
science. Modular programming and data abstraction were already being used in
programming.

1945 to 1965: The origins

The term software engineering came into existence in the late 1950s and early 1960s.
Programmers have always known about civil, electrical, and computer engineering but
found it difficult to marry engineering with software.

In 1968 and 1969, two conferences on software engineering were sponsored by the NATO
Science Committee. This gave the field its initial boost. It was widely believed that these
conferences marked the official start of the profession of software engineering.

1965 to 1985: The software crisis

Software engineering was prompted by the software crisis of the 1960s, 1970s, and 1980s.
It was the crisis that identified many of the problems of software development. This era was
also characterised by projects running over budget and schedule, and by property damage
and loss of life caused by poor project management. Initially the software crisis was defined
in terms of productivity, but it advanced to emphasize quality.

• Cost and Budget Overruns: The OS/360 operating system was a classic example. It
was a decade-long project from the 1960s and eventually produced one of the most
complex software systems at the time.
• Property Damage: Software defects can result in property damage. Poor software
security allows hackers to steal identities, costing time, money, and reputations.
• Life and Death: Software defects can kill. Some embedded systems used in
radiotherapy machines failed so disastrously that they administered lethal doses
of radiation to patients. The most famous of these failures is the Therac-25 incident.

1985 to 1989: No silver bullet

For years, solving the software crisis was the primary concern for researchers and
companies producing software tools. Seemingly, every new technology and
practice from the 1970s to the 1990s was proclaimed a silver bullet to solve the software
crisis. Tools, discipline, formal methods, process, and professionalism were promoted as
silver bullets:
• Tools: Particularly emphasised tools included structured programming, object-oriented
programming, CASE tools, Ada, Java, documentation, standards, and the
Unified Modeling Language; all were touted as silver bullets.
• Discipline: Some pundits argued that the software crisis was due to the lack of
discipline of programmers.
• Formal methods: Some believed that if formal engineering methodologies would be
applied to software development, then production of software would become as
predictable an industry as other branches of engineering. They advocated proving all
programs correct.
• Process: Many advocated the use of defined processes and methodologies like the
Capability Maturity Model.
• Professionalism: This led to work on a code of ethics, licenses, and professionalism.

Fred Brooks, in his 1986 article "No Silver Bullet", argued that no individual technology or
practice would ever make a 10-fold improvement in productivity within 10 years.

Debate about silver bullets continued over the following decade. Supporters of Ada,
components, and processes continued arguing for years that their favorite technology would
be a silver bullet. Skeptics disagreed. Eventually, almost everyone accepted that no silver
bullet would ever be found. Yet, claims about silver bullets arise now and again, even today.

"No silver bullet" means different things to different people; some take "no silver bullet"
to mean that software engineering failed. The pursuit of a single key to success never
worked. All known technologies and practices have only made incremental improvements
to productivity and quality. Yet, there are no silver bullets for any other profession, either.
Others interpret "no silver bullet" as evidence that software engineering has finally matured
and recognized that projects succeed due to hard work.

However, it could also be pointed out that there are, in fact, a series of silver bullets today,
including lightweight methodologies, spreadsheet calculators, customized browsers, in-site
search engines, database report generators, integrated design-test coding-editors with
memory/differences/undo, and specialty shops that generate niche software, such as
information websites, at a fraction of the cost of totally customized website development.
Nevertheless, the field of software engineering looks as if it is too difficult and different for
a single "silver bullet" to improve most issues, and each issue accounts for only a small
portion of all software problems.

1990 to 1999: Importance of the Internet


The birth of the Internet played a major role in software engineering. With its arrival,
information could be retrieved from the World Wide Web speedily. Programmers could handle
illustrations, maps, photographs, and other images, plus simple animation, at a very fast
rate.

It became easier to display and retrieve information as a result of the use of browsers and
the HTML language. The spread of network connections brought computer viruses
and worms to MS Windows computers. These new technologies brought in a lot of good
innovations such as e-mail, web-based searching and e-education, to mention a few. As a
result, many software systems had to be re-designed for international searching. It was also
required to translate the information flow into multiple foreign languages. Many software
systems were designed for multi-language usage, based on design concepts from human
translators.

2000 to Present: Lightweight Methodologies

This era witnessed increasing demand for software in many smaller organizations. There
was also the need for inexpensive software solutions, and this led to the growth of simpler,
faster methodologies that developed running software quickly, from requirements to
deployment. There was a change from rapid prototyping to entire lightweight methodologies.
For example, Extreme Programming (XP) tried to simplify many areas of software engineering,
including requirements gathering and reliability testing, for the growing, vast number of
small software systems.

What is it today?

Software engineering as a profession is still being defined in its boundary and content.
Software engineering is rated as one of the best jobs in developed economies in terms of
growth, pay, flexibility and so on.

Important figures in the history of software engineering

Listed below are some renowned software engineers:

• Charles Bachman (born 1924) is particularly known for his work in the area of
databases.
• Fred Brooks (born 1931) is best known for managing the development of OS/360.
• Peter Chen is known for the development of entity-relationship modeling.
• Edsger Dijkstra (1930-2002) developed the framework for structured programming.
• David Parnas (born 1941) developed the concept of information hiding in modular
programming.

Software Engineer

Who is a Software Engineer?

A software engineer is an individual who applies the principles of software engineering
to the design, development, testing, and evaluation of software and systems in order
to meet the client's requirements. He/she fully designs software, tests, debugs and
maintains it. A software engineer needs knowledge of a variety of computer programming
languages and applications to enable him or her to cope with the variety of work at hand. In
view of this, he or she can sometimes be referred to as a computer programmer.

Functions of a Software Engineer

• Analyses information to determine, recommend, and plan computer specifications
and layouts, and peripheral equipment modifications.
• Analyses user needs and software requirements to determine feasibility of design
within time and cost constraints.
• Coordinates software system installation and monitors equipment functioning to
ensure specifications are met.
• Designs, develops and modifies software systems, using scientific analysis and
mathematical models to predict and measure outcomes and consequences of design.
• Determines system performance standards.
• Develops and directs software system testing and validation procedures, programming,
and documentation.
• Modifies existing software to correct errors, allow it to adapt to new hardware,
or improve its performance.
• Obtains and evaluates information on factors such as reporting formats required,
costs, and security needs to determine hardware configuration.
• Stores, retrieves, and manipulates data for analysis of system capabilities and
requirements.

Technical and Functional Knowledge and requirements of a Software Engineer

Most employers commonly recognise the technical and functional knowledge statements
listed below as general occupational qualifications for computer software engineers.
Although it is not required for the software engineer to have all of the knowledge on the list
in order to be a successful performer, adequate knowledge, skills, and abilities are necessary
for effective delivery of service.

The Software Engineer should have Knowledge of:

• Circuit boards, processors, chips, electronic equipment, and computer hardware
and software, as well as applications and programming.
• Practical application of engineering science and technology. This includes
applying principles, techniques, procedures, and equipment to the design and
production of various goods and services.
• Arithmetic, algebra, geometry, calculus, statistics, and their applications.
• Structure and content of the English language including the meaning and
spelling of words, rules of composition, and grammar.
• Business and management principles involved in strategic planning, resource
allocation, human resources modelling, leadership technique, production
methods, and coordination of human and material resources.
• Principles and methods for curriculum and training design, teaching and
instruction for individuals and groups, and the measurement of training
effects.
• Design techniques, tools, and principles involved in production of precision
technical plans, blueprints, drawings, and models.
• Administrative and clerical procedures and systems such as word processing,
managing files and records, stenography and transcription, designing forms,
and other office procedures and terminology.
• Principles and processes for providing customer and personal services. This
includes customer needs assessment, meeting quality standards for services, and
evaluation of customer satisfaction.
• Transmission, broadcasting, switching, control, and operation of
telecommunications systems.

Occupational features of a software Engineer

Occupations have traits or characteristics which give important clues about the nature of the
work and work environment and offer you an opportunity to match your own personal
interests to a specific occupation.

Software engineer occupational characteristics or features can be categorised as: Realistic,
Investigative and Conventional as described below:

Realistic — Realistic occupations frequently involve work activities that include practical,
hands-on problems and solutions. They often deal with plants, animals, and real-world
materials like wood, tools, and machinery. Many of the occupations require working
outside, and do not involve a lot of paperwork or working closely with others.

Investigative — Investigative occupations frequently involve working with ideas, and
require an extensive amount of thinking. These occupations can involve searching for facts
and figuring out problems mentally.

Software Crisis.

What is Software Crisis?

The term software crisis was used in the early days of software engineering. It was used to
describe the impact of rapid increases in computer power and the difficulty of the
problems which could now be tackled. In essence, it refers to the difficulty of writing correct,
understandable, and verifiable computer programs. The sources of the software crisis are
complexity, expectations, and change.

Conflicting requirements have always hindered the software development process. For instance,
while users demand a large number of features, customers generally want to minimise the
amount they must pay for the software and the time required for its development.

F. L. Bauer coined the term "software crisis" at the first NATO Software Engineering
Conference in 1968 at Garmisch, Germany. The term was used early in Edsger Dijkstra's
1972 ACM Turing Award Lecture:

The major cause of the software crisis is that the machines have become more powerful!
This implied that as long as there were no machines, programming was no problem at
all; when there were a few weak computers, programming became a mild problem; and now,
with huge computers, programming has become an equally huge problem.

Manifestation of Software Crisis

The crisis manifested itself in several ways:

• Projects running over-budget.
• Projects running over-time.
• Software was very inefficient.
• Software was of low quality.
• Software often did not meet requirements.
• Projects were unmanageable and code difficult to maintain.
• Software was never delivered.

Causes of Software Engineering Crisis

The challenging practical areas include fiscal, human resource, infrastructure, and
marketing issues. The causes of failure in the software development industry can be grouped
into two areas: 1) poor marketing efforts, and 2) lack of quality products.
Poor marketing efforts
The problem of poor marketing efforts is more noticeable in the developing economies,
where consumers of software products prefer imported software to the detriment of locally
developed ones. This problem is compounded by poor marketing approaches and the fact
that most of the hardware was not manufactured locally. Though the use of software in our
industries, service providing organizations, and other commercial institutions is increasing
appreciably, the demand for locally developed software products is not growing at the
same rate.

One of the major reasons for this is the lack of an established national policy that can speed up
the creation of an internal market for locally developed software products. The relatively low
price of foreign software (especially from neighbouring countries) attracts consumers to
foreign products rather than local ones.

One may want to ask why clients should go for local software. In this situation, the
question may also be why foreign software products are cheaper than the locally
developed ones. The answers to these questions are not far-fetched. The cost
of the initial take-off of producing a software product is significantly higher than that of its
subsequent versions, because the latter can be produced by merely copying the initial one.

Most of the foreign software products available in the market are such succeeding versions.
For this reason, the consumers in our country do not have to bear the initial cost of the
development. Furthermore, such software is seen as more reliable because it already has an
established reputation, and many international commercial companies use these products
efficiently.

On the contrary, most of the software firms in Bangladesh, for example, need to charge their
clients the initial cost of development even though the reliability of their products is
quite uncertain. Consequently, the local clients are not interested in buying local software
products. To change this situation, the government must take steps by imposing high taxes on
foreign software products and by implementing a strict copyright act for the use of software
products.

International market
Apart from developing the internal software market, we also need to aim at the international
market. At present, as our software firms have no strong track record in developing software
products, competing with other countries would be a fruitless effort. India, for example, has
a high profile as far as software development is concerned. India has been in the global market for
at least twenty years. India can take advantage of winning software projects from the global
market because of its long experience as well as the availability of many high-level IT experts at
relatively low cost compared with the developed countries. Apart from these, India has
professional immigrant communities in the US and in other developed countries who have
succeeded in influencing the global market to procure software projects for India.

We cannot, therefore, compete with India at this time for software projects from the
global market. However, there is the need to have a policy to boost our marketing strategy
to procure global software projects. One of the ways to do this is to allow a country like
Bangladesh, through its embassies/high commissions, to open up special software
marketing units in different developed countries. Apart from this, our professional expatriates
living in the USA and other developed countries can also assist by setting up software firms
to procure software projects to be developed in Bangladesh at low cost.

In the area of software development, timing is an essential factor. Inability to deliver the
product on time can lead to loss of clients. Our observation has shown that clients cancel
work orders when software firms fail to meet the deadline. Failure to meet the
deadline for any software project may result in a negative attitude towards our software
marketing efforts.

Pricing: One of the major challenges to a software developer is how to price the
product. Most of the time, the question is "How much should our product go for?" On one
hand, asking too low a price is risky, because in that case developers will not be
able to break even. On the other hand, charging too much for the product will be a barrier
to our marketing efforts. In order to solve this problem, sound economic theories need
to be applied.

These theories must be applied when software companies fix the prices of their products.
One major lesson here is that we who are just starting in the global software market should
minimise our profit margin.

Lack of quality products


Since most of the systems are to be used in real-time environments, quality assurance is of
primary concern. Presently our software companies are yet to establish themselves as far as
developing quality software is concerned. It is of interest to note that presently we have
over 200 software development firms, only 20 of them have earned ISO 9001 certification,
and not even a single one has attained CMM/CMMI level 3. Even though certification is not
the only yardstick for the quality of a software product, ISO certification is important
because it focuses on the general aspects of development to certify quality. It must also
be stated that if a software product could pass at least level three of CMM/CMMI, then we
can classify it as a quality product. The hindrances to achieving quality software on the part of
our software industries are discussed below:

Lack of expertise in producing sound user requirements: Allowing developing firms
to go through defined software development steps, as suggested in the software
engineering discipline, is a pathway to ensuring the quality of software products. The very
first step is to analyze the users' requirements, and the design of the system vastly depends on
defining users' requirements precisely.

Ideally, system analysts should do all sorts of analysis to produce user requirement analysis
documents. Regrettably, in Bangladesh, quite a few firms do not pay much attention to producing
sound user requirement documents. This reveals a lack of theoretical knowledge in system
analysis and design. To produce high-quality requirement analysis documents there is need
for in-depth theoretical knowledge in system analysis and design, but many local
software development firms lack expertise in this field. In order to rectify this problem,
academics in the field have to be consulted to give the necessary assistance geared
towards producing sound user requirement analysis documents.

Lack of expertise in designing the system: Aside from user requirement analysis, another
important aspect of the development process is the design of the software product.
The design of any system affects the effectiveness of the implemented software. Again, one
of the major problems confronting our software industries is the non-availability of expert
software designers. It is a fact that what we have on ground are programmers
or coders, but the number of experienced and expert software engineers is still small.

In fact, we rarely have resourceful persons who can properly guide large and complex software
projects in our software industries. The result is that there are no quality end products.
It may be mentioned here that sound academic knowledge in software engineering is a must
for developing a quality software system. A link between industries and academic
institutions can improve this situation. The utilisation of the sound theoretical knowledge of
academics in industrial software projects cannot be overlooked. Besides, depending on the
complexity of the project, software firms may need to involve foreign experts for specific
periods to complete the project properly.

Lack of knowledge in development models

There is a need to follow a specific model in the software development process. The practice
in many software development firms is not to follow any particular model, and this has
greatly affected the quality of software products. It is mandatory for a software developer,
therefore, to select a model prior to starting a software project so as to have a quality product.

Absence of proper software testing procedure: For us to have quality software production,
the issue of software testing should be taken with utmost seriousness. A software product
demands exhaustive testing to check its performance. Many theoretical testing methodologies
abound to check the performance and integrity of software. It is rather unfortunate to note
that many developing firms hastily deliver the end product to their clients without
performing extensive tests. The result of this is that many software products are not free from
bugs. It should be pointed out here that fixing bugs after delivery is costlier than fixing them
during development. It is therefore important for developers to perform the test phase of
development before delivering the end product to the clients.

Inconsistent documentation: Documentation is a very important aspect of software
development. Most of the time, the documents produced by some software firms are either
incomplete or inconsistent. Since software is an ever-growing product, documentation of the
code must be produced and preserved for possible future enhancement of the
software.

Solution to Software Crisis

Various processes and methodologies have been developed over the last few decades to
"tame" the software crisis, with varying degrees of success. However, it is widely agreed
that there is no "silver bullet", that is, no single approach which will prevent project
overruns and failures in all cases. In general, software projects which are large, complicated,
poorly specified, and involve unfamiliar aspects are still particularly vulnerable to large,
unanticipated problems.

Overview of Software Development

Definition of Software Development

Software development is the set of activities that results in software products. Software
development may include research, new development, modification, reuse, re-engineering,
maintenance, or any other activities that result in software products. In particular, the first
phase in the software development process may involve many departments, including
marketing, engineering, research and development, and general management.

The term software development may also refer to computer programming, the process of
writing and maintaining the source code.

Stages of Software Development

There are several different approaches to software development. While some take a more
structured, engineering-based approach, others may take a more incremental approach,
where software evolves as it is developed piece-by-piece. In general, methodologies share
some combination of the following stages of software development:

• Market research
• Gathering requirements for the proposed business solution
• Analyzing the problem
• Devising a plan or design for the software-based solution
• Implementation (coding) of the software
• Testing the software
• Deployment
• Maintenance and bug fixing

These stages are collectively referred to as the software development lifecycle (SDLC).
These stages may be carried out in different orders, depending on the approach to software
development. The time devoted to different stages may also vary, and the detail of the
documentation produced at each stage may not be the same. In a "waterfall" based
approach, stages are carried out in turn, whereas in a more "extreme" approach, the
stages may be repeated over various cycles or iterations. It is important to note that a more
"extreme" approach usually involves less time spent on planning and documentation, and
more time spent on coding and development of automated tests. More "extreme"
approaches also encourage continuous testing throughout the development lifecycle, which
helps keep the product close to bug-free at all times. The "waterfall" based approach attempts
to assess the majority of risks and develop a detailed plan for the software before implementation
(coding) begins, and thereby avoids significant design changes and re-coding in later stages of the
software development lifecycle.

Each methodology has its merits and demerits. The choice of an approach to solving a
problem using software depends on the type of problem. If the problem is well
understood and a solution can be effectively planned out ahead of time, the more
"waterfall" based approach may be the best choice. On the other hand, if the problem
is unique (at least to the development team) and the structure of the software solution
cannot be easily pictured, then a more "extreme" incremental approach may work best.

Software Development Life Cycle Model

Definition of Life Cycle Model

Software life cycle models describe phases of the software cycle and the order in which
those phases are executed. There are a lot of models, and many companies adopt their own,
but all have very similar patterns. According to Raymond Lewallen (2005), the general, basic
model is shown below:

The General Model

Fig 1 The General Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.

Each phase produces deliverables needed by the next phase in the life cycle.
Requirements are converted into design. Code is generated during implementation that is
driven by the design. Testing verifies the deliverable of the implementation phase against
requirements.

Waterfall Model

This is the most common life cycle model, also referred to as a linear-sequential life cycle
model. It is very simple to understand and use. In a waterfall model, each phase must be
completed before the next phase can begin. At the end of each phase, there is always a
review to ascertain if the project is in the right direction and whether or not to continue or
abandon the project. Unlike the general model, phases do not overlap in a waterfall model.

Fig 2 Waterfall Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
Advantages

• Simple and easy to use.
• Easy to manage due to the rigidity of the model – each phase has specific deliverables
and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.

Disadvantages

• Adjusting scope during the life cycle can kill a project.
• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty.
• Poor model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Poor model where requirements are at a moderate to high risk of changing.

V-Shaped Model

Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of
processes. Each phase must be completed before the next phase begins. Testing is
emphasized in this model more than in the waterfall model. The testing procedures are
developed early in the life cycle, before any coding is done, during each of the phases
preceding implementation.

Requirements begin the life cycle model just like the waterfall model. Before development
is started, a system test plan is created. The test plan focuses on meeting the functionality
specified in the requirements gathering.

The high-level design phase focuses on system architecture and design. An integration test
plan is created in this phase as well, in order to test the ability of the pieces of the software
system to work together.

The low-level design phase is where the actual software components are designed, and unit
tests are created in this phase as well.

The implementation phase is, again, where all coding takes place. Once coding is complete,
the path of execution continues up the right side of the V where the test plans developed
earlier are now put to use.

Fig 3 V-Shaped Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.

Advantages
• Simple and easy to use.
• Each phase has specific deliverables.
• Higher chance of success over the waterfall model due to the development of test
plans early on during the life cycle.
• Works well for small projects where requirements are easily understood.

Disadvantages

• Very rigid, like the waterfall model.
• Little flexibility, and adjusting scope is difficult and expensive.
• Software is developed during the implementation phase, so no early prototypes of the
software are produced.
• The model doesn't provide a clear path for problems discovered during testing phases.

Incremental Model

The incremental model is an intuitive approach to the waterfall model. It is a kind of
"multi-waterfall" cycle, in that multiple development cycles take place. Cycles are
broken into smaller, more easily managed iterations. Each of the iterations goes through the
requirements, design, implementation and testing phases.

The first iteration produces a working version of the software, and this makes it possible to
have working software early on during the software life cycle. Subsequent iterations build on
the initial software produced during the first iteration.

Fig 4 Incremental Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.

Advantages

• Generates working software quickly and early during the software life cycle.
• More flexible – inexpensive to change scope and requirements.
• Easier to test and debug during a smaller iteration.
• Easier to manage risk because risky pieces are identified and handled during their own
iteration.
• Each of the iterations is an easily managed milestone.

Disadvantages

• Each phase of an iteration is rigid and the phases do not overlap each other.
• Problems concerning the system architecture may arise because not all requirements
are gathered up front for the entire software life cycle.

Spiral Model

The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases, namely Planning, Risk Analysis, Engineering and
Evaluation. A software project continually goes through these phases in iterations which are
called spirals. In the baseline spiral, requirements are gathered and risk is assessed. Each
subsequent spiral builds on the baseline spiral.

Requirements are gathered during the planning phase. In the risk analysis phase, a process
is carried out to discover risks and alternative solutions. A prototype is produced at the end
of the risk analysis phase.

Software is produced in the engineering phase, along with testing at the end of the
phase. The evaluation phase provides the customer with an opportunity to evaluate the
output of the project to date before the project continues to the next spiral.

In the spiral model, the angular component denotes progress, and the radius of the spiral
denotes cost.

Fig 5 Spiral Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.

Merits

• High amount of risk analysis.
• Good for large and mission-critical projects.
• Software is produced early in the software life cycle.

Demerits

• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• The project's success is highly dependent on the risk analysis phase.
• Doesn't work well for smaller projects.

Requirements Phase
Business requirements are gathered in this phase. This phase is the main centre of
attention of the project managers and stakeholders. Meetings with managers, stakeholders
and users are held in order to determine the requirements. The general questions
that require answers during a requirements gathering phase are: Who is going to use the
system? How will they use the system? What data should be input into the system? What
data should be output by the system? A list of the functionality that the system should
provide is produced at this point, describing the functions the system should perform, the
business logic that processes data, what data is stored and used by the system, and how the
user interface should work. The requirements development phase may have
been preceded by a feasibility study, or a conceptual analysis phase of the project. The
requirements phase may be divided into requirements elicitation (gathering the
requirements from stakeholders), analysis (checking for consistency and completeness),
specification (documenting the requirements) and validation (making sure the specified
requirements are correct).

In systems engineering, a requirement can be a description of what a system must do,
referred to as a functional requirement. This type of requirement specifies something
that the delivered system must be able to do. Another type of requirement specifies
something about the system itself, and how well it performs its functions. Such
requirements are often called non-functional requirements, or 'performance requirements'
or 'quality of service requirements.' Examples of such requirements include usability,
availability, reliability, supportability, testability, maintainability, and (if defined in a way
that's verifiably measurable and unambiguous) ease-of-use.

Types of Requirements

Requirements are categorised as:

• Functional requirements, which describe the functionality that the system is to
execute; for example, formatting some text or modulating a signal.
• Non-functional requirements, which are the ones that act to constrain the solution.
Non-functional requirements are sometimes known as quality requirements or
constraint requirements. No matter how the problem is solved, the constraint
requirements must be adhered to.

It is important to note that functional requirements can be directly implemented in software.
The non-functional requirements are controlled by other aspects of the system. For
example, in a computer system reliability is related to hardware failure rates, and performance
is controlled by CPU and memory. Non-functional requirements can in some cases be broken
into functional requirements for software. For example, a system level non-functional
safety requirement can be decomposed into one or more functional requirements. In
addition, a non-functional requirement may be converted into a process requirement when
the requirement is not easily measurable. For example, a system level maintainability
requirement may be decomposed into restrictions on software constructs or limits on lines
of code.
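As a hedged illustration of such a decomposition (the 400-line limit, the directory layout and the script itself are assumptions made for the example, not rules from the text), a maintainability requirement turned into a process rule such as "no source module may exceed 400 lines" can be checked automatically:

    import pathlib
    import sys

    MAX_LINES = 400  # assumed limit derived from a maintainability requirement

    def oversized_modules(src_dir: str, max_lines: int = MAX_LINES):
        """Yield (path, line_count) for every .py file that breaks the limit."""
        for path in pathlib.Path(src_dir).rglob("*.py"):
            with path.open(encoding="utf-8", errors="ignore") as handle:
                line_count = sum(1 for _ in handle)
            if line_count > max_lines:
                yield path, line_count

    if __name__ == "__main__":
        source_dir = sys.argv[1] if len(sys.argv) > 1 else "."
        violations = list(oversized_modules(source_dir))
        for path, count in violations:
            print(f"{path}: {count} lines (limit {MAX_LINES})")
        sys.exit(1 if violations else 0)  # non-zero exit flags a failed check

A check like this can run as part of the build, turning an otherwise vague quality statement into something the team can verify on every change.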

Requirements analysis

Requirements analysis, in systems engineering and software engineering, consists of those
activities that go into determining the needs or conditions to meet for a new or altered
product, taking account of the possibly conflicting requirements of the various
stakeholders, such as beneficiaries or users.

Requirements analysis is critical to the success of a development project. Requirements
must be actionable, measurable, testable, related to identified business needs or
opportunities, and defined to a level of detail sufficient for system design.

The Need for Requirements Analysis
Studies reveal that insufficient attention to software requirements analysis at the beginning
of a project is the major reason for critically weak projects that often do not fulfil the basic tasks
for which they were designed. Software companies are now spending time and resources on
effective and streamlined software requirements analysis processes as a precondition for
successful projects that support the customer's business goals and meet the project's
requirement specifications.
Requirements Analysis Process: Requirements Elicitation, Analysis And Specification
Requirements analysis is the process of understanding the client's needs and expectations
from a proposed system or application. It is a well-defined stage in the Software
Development Life Cycle model.
Requirements are a description of how a system should behave, in other words, a description
of system properties or attributes. Considering the numerous levels of interaction between
users, business processes and devices in worldwide corporations today, there are immediate
and composite requirements from a single application, from different levels within an
organization and outside it.
The Software Requirements Analysis Process involves the complex task of eliciting and
documenting the requirements of all customers, modelling and analyzing these requirements
and documenting them as a foundation for system design.
This job (the requirements analysis process) is assigned to a specialized Requirements
Analyst. The requirements analysis function may also come under the scope of a Project
Manager, Program Manager or Business Analyst, depending on the organizational
hierarchy.

Steps in the Requirements Analysis Process

Fix system boundaries

This is the initial step and helps in identifying how the new application fits into the business
processes, how it fits into the larger picture, as well as its capacity and limitations.

Identify the customer

This focuses on identifying who the 'users' or 'customers' of an application are, that is to
say, knowing the group or groups of people who will be directly or indirectly impacted by
the new application. This allows the Requirements Analyst to know in advance where he
has to look for answers.

Requirements elicitation
Here information is gathered from the multiple stakeholders identified. The Requirements
Analyst brings out from each of these groups what their requirements from the application
are and what they expect the application to achieve. Taking into account the multiple
stakeholders involved, the list of requirements gathered in this manner could go into pages.
The level of detail of the requirements list depends on the number and size of user groups,
the degree of complexity of business processes and the size of the application.

Problems faced in Requirements Elicitation
• Ambiguous understanding of processes
• Inconsistency within a single process by multiple users
• Insufficient input from stakeholders
• Conflicting stakeholder interests
• Changes in requirements after the project has begun

Tools used in Requirements Elicitation


Tools used in Requirements Elicitation include stakeholder interviews and focus group
studies. Other methods like flowcharting of business processes and the use of existing
documentation like user manuals, organizational charts, process models and systems or
process specifications, on-site analysis, interviews with end-users, market research and
competitor analysis are also used widely in Requirements Elicitation.
There are of course, modern tools that are better equipped to handle the complex and
multilayered process of Requirements Elicitation. Some of the current Requirements
Elicitation tools in use are:
• Prototypes
• Use cases
• Data flow diagrams
• Transition process diagrams
• User interfaces

Requirements Analysis
Once all stakeholder requirements have been gathered, a structured analysis of these
can be done after modelling the requirements. Some of the software requirements analysis
techniques used are requirements animation, automated reasoning, knowledge-based
critiquing, consistency checking, and analogical and case-based reasoning.

Requirements Specification
After requirements have been elicited, modelled and analyzed, they should be
documented in clear, definite terms. A written requirements document is crucial, and as
such it should be circulated among all stakeholders including the client, user groups, and the
development and testing teams. It has been observed that a well-designed, clearly
documented Requirements Specification is vital and serves as a:
• Base for validating the stated requirements and resolving stakeholder conflicts, if any
• Contract between the client and development team
• Basis for systems design for the development team
• Bench-mark for project managers for planning project development lifecycle and goals
• Source for formulating test plans for QA and testing teams
• Resource for requirements management and requirements tracing
• Basis for evolving requirements over the project life span
Software requirements specification involves scoping the requirements so that they meet the
customer's vision. It is the result of teamwork between the end-user, who is usually not a
technical expert, and a Technical/Systems Analyst, who is expected to approach the
situation in technical terms.

The software requirements specification is a document that lists out stakeholders‘ needs
and communicates these to the technical community that will design and build the system.
It is a real challenge to communicate a well-written requirements specification to both
these groups and all the sub-groups within them. To overcome this, Requirements
Specifications may be documented separately as:
• User Requirements - written in clear, precise language with plain text and use cases,
for the benefit of the customer and end-user
• System Requirements - expressed as a programming or mathematical model, meant to
address the Application Development Team and QA and Testing Team.
Requirements Specification serves as a starting point for software, hardware and database
design. It describes the function (Functional and Non-Functional specifications) of the
system, performance of the system and the operational and user-interface constraints that will
govern system development.

Requirements Management
Requirements Management is the all-inclusive process that covers all aspects of software
requirements analysis and also ensures verification, validation and traceability of
requirements. Effective requirements management practices ensure that all system
requirements are stated unmistakably, that omissions and errors are corrected and that
evolving specifications can be incorporated later in the project lifecycle.

Design Phase

The software system design is produced from the results of the requirements phase. This is
where the details of how the system will work are worked out. Deliverables in this phase
include the hardware and software design, communication design, and software design documents.

Definition of software design

A software design is a meaningful engineering representation of some software product
that is to be built. A design can be traced to the customer's requirements and can be
assessed for quality against predefined criteria. In the software engineering context,
design focuses on four major areas of concern: data, architecture, interfaces and
components.

The design process is very important. A builder, for example, would not attempt to
construct a house without an approved blueprint, since doing so would risk the structural
integrity of the house and the customer's satisfaction. The approach to building software
products is no different. The emphasis in design is on quality. It is pertinent to note that
this is the only phase in which the customer's requirements can be precisely translated
into a finished software product or system. As such, software design serves as the
foundation for all the software engineering steps that follow, regardless of which process
model is being employed.

During the design process the software specifications are changed into design models that
express the details of the data structures, system architecture, interface, and components.
Each design product is re-examined for quality before moving to the next phase of
software development. At the end of the design process a design specification document
is produced. This document is composed of the design models that describe the data,
architecture, interfaces and components.

Design Specification Models

• Data design – created by changing the analysis information model (data dictionary
and ERD) into data structures needed to implement the software. Part of the data
design may occur in combination with the design of software architecture. More
detailed data design occurs as each software component is designed.
• Architectural design - defines the relationships among the major structural
elements of the software, the "design patterns" that can be used to attain the
requirements that have been defined for the system, and the constraints that affect the
way in which the architectural patterns can be applied. It is derived from the system
specification, the analysis model, and the subsystem interactions defined in the
analysis model (DFD).
• Interface design - explains how the software elements communicate with each
other, with other systems, and with human users. Much of the necessary information
required is provided by the data flow and control flow diagrams.
• Component-level design – It converts the structural elements defined by the
software architecture into procedural descriptions of software components using
information acquired from the process specification (PSPEC), control specification
(CSPEC), and state transition diagram (STD).

Design Guidelines

In order to assess the quality of a design (representation) the yardstick for a good design
should be established. Such a design should:

• exhibit good architectural structure


• be modular
• contain distinct representations of data, architecture, interfaces, and components
(modules)
• lead to data structures that are appropriate for the objects to be implemented and be
drawn from recognizable design patterns
• lead to components that exhibit independent functional characteristics
• lead to interfaces that reduce the complexity of connections between modules and
with the external environment
• be derived using a reputable method that is driven by information obtained during
software requirements analysis

These criteria are not acquired by chance. The software design process promotes good design
through the application of fundamental design principles, systematic methodology and
through review.

Design Principles

Software design can be seen as both a process and a model.

"The design process is a series of steps that allow the designer to describe all aspects of
the software to be built. However, it is not merely a recipe book; for a competent and
successful design, the designer must use creative skill, past experience, a sense of what
makes 'good' software, and have a commitment to quality."
The set of principles which has been established to help the software engineer in directing
the design process are:

• The design process should not suffer from tunnel vision – A good designer should
consider alternative approaches, judging each one based on the requirements of
the problem, the resources available to do the job and any other constraints.
• The design should be traceable to the analysis model – because a single
element of the design model often traces to multiple requirements, it is
necessary to have a means of tracking how the requirements have been
satisfied by the model
• The design should not reinvent the wheel – Systems are constructed using a
suite of design patterns, many of which may have likely been encountered
before. These patterns should always be chosen as an alternative to reinvention.
Design time should be spent in expressing truly fresh ideas and incorporating
those patterns that already exist.
• The design should reduce intellectual distance between the software and the
problem as it exists in the real world – This means that, the structure of the
software design should imitate the structure of the problem domain.
• The design should show uniformity and integration – a design is uniform if it
appears that one person developed the whole thing. Rules of style and format
should be defined for a design team before design work begins. A design is
integrated if care is taken in defining interfaces between design components.
• The design should be structured to degrade gently, even when bad data, events,
or operating conditions are encountered – Well-designed software should never
"bomb". It should be designed to accommodate unusual circumstances, and if
it must terminate processing, it should do so in a graceful manner.
• The design should be reviewed to minimize conceptual (semantic) errors –
there is sometimes a tendency to focus on minute details when the design is
reviewed, missing the forest for the trees. The design team should ensure that
major conceptual elements of the design have been addressed before worrying
about the syntax of the design model.
• Design is not coding, coding is not design – Even when detailed designs are
created for program components, the level of abstraction of the design model is
higher than source code. The only design decisions made at the coding level
address the small implementation details that enable the procedural design to
be coded.
• The design should be structured to accommodate change
• The design should be assessed for quality as it is being created

With proper application of design principles, the design exhibits both external and
internal quality factors. External quality factors are those that can readily be
observed by the user (e.g. speed, reliability, correctness, usability). Internal quality
factors have to do with technical quality, in particular the quality of the design itself. To
achieve the internal quality factors the designer must understand basic design concepts.

Fundamental Software Design Concepts

Over the past four decades, a set of fundamental software design concepts has evolved,
each providing the software designer with a foundation from which more sophisticated
design methods can be applied. Each concept assists the software engineer to answer the
following questions:
• What criteria can be used to partition software into individual components?
• How is function or data structure detail separated from a conceptual representation
of software?
• Are there uniform criteria that define the technical quality of a software design?
The fundamental design concepts are:

• Abstraction - allows designers to focus on solving a problem without being
concerned about irrelevant lower level details (procedural abstraction - named
sequence of events, data abstraction - named collection of data objects)
• Refinement - process of elaboration where the designer provides successively more
detail for each design component
• Modularity - the degree to which software can be understood by examining its
components independently of one another
• Software architecture - overall structure of the software components and the ways
in which that structure provides conceptual integrity for a system
• Control hierarchy or program structure - represents the module organization and
implies a control hierarchy, but does not represent the procedural aspects of the
software (e.g. event sequences)
• Structural partitioning - horizontal partitioning defines three partitions (input,
data transformations, and output); vertical partitioning (factoring) distributes
control in a top-down manner (control decisions in top level modules and
processing work in the lower level modules).
• Data structure - representation of the logical relationship among individual data
elements (requires at least as much attention as algorithm design)
• Software procedure - precise specification of processing (event sequences,
decision points, repetitive operations, data organization/structure)
• Information hiding - information (data and procedure) contained within a module
is inaccessible to modules that have no need for such information

Modularity

What is Modularity?

Modularity is a general systems concept which is the degree to which a system's
components may be separated and recombined. It refers both to the tightness of coupling
between components and to the degree to which the "rules" of the system architecture
enable (or prohibit) the mixing and matching of components.

The concept of modularity in computer software has been promoted for about five decades.
In essence, the software is divided into separately named and addressable components called
modules that are integrated to satisfy problem requirements. It is important to note that a
reader cannot easily understand a large program written as a single module. The number of
variables, control paths and the sheer complexity make understanding almost impossible. As a
result a modular approach will allow for the software to be intellectually manageable.
However, software cannot be subdivided indefinitely so as to make the effort required to
understand or develop it negligible. Although the effort needed to develop each individual
module decreases as the number of modules grows, the effort needed to integrate the modules
and manage their interfaces increases, so there is an optimum number of modules beyond
which further subdivision adds cost.

Logical Modularity

Generally in software, modularity can be categorized as logical or physical. Logical
modularity is concerned with the internal organization of code into logically-related
units. In modern high level languages, logical modularity usually starts with the class, the
smallest code group that can be defined. In languages such as Java and C#, classes can be
further combined into packages which allow developers to organize code into group of
related classes. Depending on the environment, a module can be implemented as a single
class, several classes in a package, or an entire API (a collection of packages). You should
be able to describe the functionality of your module in a single sentence (e.g. "this module
calculates tax per zip code"), regardless of the implementation scale of your module. Your
module should expose its functionality as simple interfaces that shield callers from all
implementation details. The functionality of a module should be accessible through a
published interface that allows the module to expose its functionalities to the outside
world while hiding its implementation details.
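
As a minimal sketch of this idea (the names below are hypothetical, chosen only to match
the tax-per-zip-code example above), a module in Java might publish a small interface
while keeping its implementation class package-private:

package com.example.tax;               // assumed package name, for illustration only

// Published interface: this is all that callers outside the module ever see.
public interface TaxCalculator {
    double rateForZip(String zipCode); // returns the tax rate for a given ZIP code
}

// Package-private implementation: hidden behind the interface, so it can be
// changed or replaced without affecting any caller of the module.
class ZipTableTaxCalculator implements TaxCalculator {
    private final java.util.Map<String, Double> rates;

    ZipTableTaxCalculator(java.util.Map<String, Double> rates) {
        this.rates = rates;
    }

    @Override
    public double rateForZip(String zipCode) {
        return rates.getOrDefault(zipCode, 0.0); // unknown ZIP codes default to 0%
    }
}

Callers work only with TaxCalculator; the table-based implementation behind it can be
swapped for another without changing any calling code.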

Physical Modularity

Physical Modularity is probably the earliest form of modularity introduced in software
creation. Physical modularity consists of two main components, namely: (1) a file that
contains compiled code and other resources and (2) an executing environment that
understand how to execute the file. Developers build and assemble their modules into
compiled assets that can be distributed as single or multiple files. In Java for example, the
jar file is the unit of physical modularity for code distribution (.Net has the assembly). The
file and its associated meta-data are designed to be loaded and executed by the run time
environment that understands how to run the compiled code. Physical
modularity can also be affected by the context and scale of abstraction. Within Java, for
instance, the developer community has created and accepted several physical modularity
strategies to address different aspects of enterprise development: 1) WAR for web
components, 2) EJB for distributed enterprise components, 3) EAR for enterprise
application components, and 4) vendor-specific modules such as the JBoss Service Archive (SAR).
These are usually variations of the JAR file format with special meta-data to target the
intended runtime environment. The current trend of adoption seems to be pointing to
OSGi as a generic physical module format. OSGi provides the Java environment with
additional functionalities that should allow developers to model their modules to scale
from small embeddable components to complex enterprise components (a lofty goal indeed).

Benefits of Modular Design

• Scalable Development: a modular design allows a project to be naturally
subdivided along the lines of its modules. A developer (or group of developers)
can be assigned a module to implement independently, which can produce an
asynchronous project flow.
• Testable Code Unit: when your code is partitioned into functionally-related chunks, it
facilitates the testing of each module independently. With the proper testing
framework, developers can exercise each module (and its constituents) without
having to bring up the entire project.
• Build Robust Systems: in a monolithic software design, as your system grows in
complexity so does its propensity to be brittle (changes in one section cause failures
in another). Modularity lets you build a complex system composed of smaller parts
that can be independently managed and maintained. Fixes in one portion of the
code do not necessarily affect the entire system.
• Easier Modification & Maintenance: post-production system maintenance is
another crucial benefit of modular design. Developers have the ability to fix and
make non-infrastructural changes to a module without affecting other modules.
The updated module can independently go through the build and release cycle
without the need to re-build and redeploy the entire system.
• Functionally Scalable: depending on the level of sophistication of your modular
design, it's possible to introduce new functionalities with little or no change to
existing modules. This allows your software system to scale in functionality
without becoming brittle and a burden on developers.

Approaches of writing Modular program

The three basic approaches to designing a modular program are:

• Process-oriented design

This approach places the emphasis on the process with the objective being to design modules
that have high cohesion and low coupling. (Data flow analysis and data flow diagrams are
often used.)

• Data-oriented design

In this approach the data comes first. That is, the structure of the data is determined first,
and then procedures are designed to fit the structure of the data.

• Object-oriented design

In this approach, the objective is to first identify the objects and then build the product
around them. In essence, this technique is both data- and process-oriented.

Criteria for using Modular Design

• Modular decomposability – If the design method provides a systematic
means for breaking a problem into sub-problems, it will reduce the complexity
of the overall problem, thereby achieving a modular solution.
• Modular composability – If the design method enables existing (reusable)
design components to be assembled into a new system, it will yield a modular
solution that does not reinvent the wheel.
• Modular understandability – If a module can be understood as a stand-
alone unit (without reference to other modules) it will be easier to build and
easier to change.
• Modular continuity – If small changes to the system requirements result in
changes to individual modules, rather than system-wide changes, the impact of
change-induced side-effects will be minimised
• Modular protection – If an abnormal condition occurs within a module and
its effects are constrained within that module, then impact of error-induced
side-effects are minimised

Attributes of a good Module

• Functional independence - modules have high cohesion and low coupling (a small
illustrative sketch follows this list)
• Cohesion - qualitative indication of the degree to which a module focuses on just one
thing
• Coupling - qualitative indication of the degree to which a module is connected to
other modules and to the outside world
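
To make these attributes concrete, here is a minimal, hypothetical Java sketch of a
functionally independent module: it does just one thing (high cohesion) and communicates
only through its parameters and return value rather than through shared global data
(low coupling):

// Hypothetical example of a functionally independent module.
// High cohesion: the class has a single, focused responsibility (VAT pricing).
// Low coupling: it depends on nothing outside itself and exchanges data only
// through explicit parameters and return values.
public final class VatPricing {

    private VatPricing() { }   // utility module; no instances needed

    // Returns the final price of an item after adding VAT at the given rate.
    public static double finalPrice(double price, double vatRate) {
        return price + price * vatRate;
    }
}

A caller simply invokes VatPricing.finalPrice(100.0, 0.075) without needing to know
anything about how the figure is computed.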

Steps to Creating Effective Module

• Evaluate the first iteration of the program structure to reduce coupling and
improve cohesion. Once the program structure has been developed, modules may
be exploded or imploded with the aim of improving module independence.
o An exploded module becomes two or more modules in the final program
structure.
o An imploded module is the result of combining the processing implied by two
or more modules.

An exploded module normally results when common processing exists in two or more
modules and can be redefined as a separate cohesive module. When high coupling is
expected, modules can sometimes be imploded to reduce passage of control, reference to
global data and interface complexity.

• Attempt to minimise structures with high fan-out; strive for fan-in as structure
depth increases. The structure shown inside the cloud in Fig. 6 below does not make
effective use of factoring.

Fig 6 Example of a program structure

• Keep the scope of effect of a module within the scope of control for that
module.
o The scope of effect of a module is defined as all other modules
that are affected by a decision made by that module. For example, the scope of
control of module e is all modules that are subordinate to it, i.e. modules f, g, h, n, p
and q.

• Evaluate module interfaces to reduce complexity, reduce redundancy, and improve
consistency.
o Module interface complexity is a prime cause of software errors.
Interfaces should be designed to pass information simply and should be consistent
with the function of a module. Interface inconsistency (i.e. seemingly unrelated
data passed via an argument list or other technique) is an indication of low
cohesion. The module in question should be re-evaluated.

• Define modules whose function is predictable and not overly restrictive (e.g. a
module that only implements a single task).
o A module is predictable
when it can be treated as a black box; that is, the same external data will be
produced regardless of internal processing details. Modules that have internal
"memory" can be unpredictable unless care is taken in their use.
o A module that restricts processing to a single task exhibits high cohesion and
is viewed favourably by a designer.

• Strive for controlled entry modules; avoid pathological connections (e.g. branches
into the middle of another module).
o This warns against content coupling.
Software is easier to understand and maintain if the module interfaces are
constrained and controlled.

Programming Languages that formally support module concept

Languages that formally support the module concept include IBM/360 Assembler,
COBOL, RPG and PL/1, Ada, D, F, Fortran, Haskell, OCaml, Pascal, ML, Modula-2,
Erlang, Perl, Python and Ruby. The IBM System i also uses Modules in RPG, COBOL and
CL, when programming in the ILE environment. Modular programming can be performed
even where the programming language lacks explicit syntactic features to support named
modules.

Software tools can create modular code units from groups of components. Libraries of
components built from separately compiled modules can be combined into a whole by using
a linker.

Module Interconnection Languages

Module interconnection languages (MILs) provide formal grammar constructs for
describing the various module interconnection specifications required to assemble a
complete software system. MILs enable the separation between programming-in-the-
small and programming-in-the-large. Coding a module represents programming in the
small, while assembling a system with the help of a MIL represents programming in the
large. An example of a MIL is MIL-75.

Top-Down Design

Top-down is a programming style, the core of traditional procedural languages, in which
design begins by specifying complex pieces and then dividing them into successively
smaller pieces. Finally, the components are precise enough to be coded and the program
is written. It is the exact opposite of the bottom-up programming approach which is
common in object-oriented languages such as C++ or Java.

The method of writing a program using the top-down approach is to write a main procedure
that names all the major functions it will need. After that the programming team
examines the requirements of each of those functions and repeats the process. These
compartmentalized sub-routines finally will perform actions so straightforward they can
be easily and concisely coded. The program is done when all the various sub-routines
have been coded.
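
A minimal sketch of this style in Java (a hypothetical payroll example; the names are
illustrative only): the main procedure names the major functions first, and each stub is
then refined in a later pass.

// Top-down sketch: main() names the major functions; each one starts as a stub
// and is elaborated in subsequent refinement steps.
public class PayrollApp {

    public static void main(String[] args) {
        readEmployeeRecords();
        computeGrossPay();
        printPaySlips();
    }

    // Stub: to be refined into file or database access later.
    static void readEmployeeRecords() { }

    // Stub: to be refined with overtime rules (compare the wages example later in this unit).
    static void computeGrossPay() { }

    // Stub: to be refined into report formatting later.
    static void printPaySlips() { }
}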

Merits of top-down programming:

• Separating the low level work from the higher level abstractions leads to a modular
design.
• Modular design means development can be self contained.
• Having "skeleton" code illustrates clearly how low level modules integrate.
• Fewer operations errors
• Much less time consuming (each programmer is only concerned in a part of the big
project).
• Very optimized way of processing (each programmer has to apply their own
knowledge and experience to their parts (modules), so the project will become an
optimized one).
• Easy to maintain (if an error occurs in the output, it is easy to identify the errors
generated from which module of the entire program).

Bottom-up approach

In a bottom-up approach the individual base elements of the system are first specified in
great detail. These elements are then connected together to form bigger subsystems,
which are linked, sometimes in many levels, until a complete top-level system is formed.
This strategy often resembles a "seed" model, whereby the beginnings are small, but
eventually grow in complexity and completeness.

Object-oriented programming (OOP) is a programming paradigm that uses "objects" to
design applications and computer programs.

This bottom-up approach has one drawback: we need to use a great deal of intuition to decide
the functionality that is to be provided by each module. The approach is more suitable if a
system is to be developed from an existing system, because it starts from some existing
modules. Modern software design approaches usually mix both top-down and bottom-up
approaches.
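
A minimal sketch of the bottom-up style in Java (all names hypothetical): the low-level
base element is specified first, and a larger subsystem is then composed from it.

import java.util.ArrayList;
import java.util.List;

// Bottom-up sketch: StockItem is the base element defined first;
// Inventory is a larger subsystem assembled from that element afterwards.
class StockItem {
    final String stockNumber;
    int quantityOnHand;

    StockItem(String stockNumber, int quantityOnHand) {
        this.stockNumber = stockNumber;
        this.quantityOnHand = quantityOnHand;
    }
}

class Inventory {
    private final List<StockItem> items = new ArrayList<>();

    void add(StockItem item) {
        items.add(item);
    }

    // Higher-level behaviour built on top of the base element.
    int totalUnitsOnHand() {
        int total = 0;
        for (StockItem item : items) {
            total += item.quantityOnHand;
        }
        return total;
    }
}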

Pseudo code

Definition of Pseudo code


Pseudo-code is a non-formal language, a way to create a logical structure describing the
actions which will be executed by the application. Using pseudo-code, the developer expresses
the application logic in his own natural language, without applying the structural rules of a
specific programming language. The big advantage of pseudo-code is that the
application logic can be easily comprehended by any developer in the development team.
In addition, when the application algorithm is expressed in pseudo-code, it is very easy to
convert the pseudo-code into real code (using any programming language).

General guidelines for writing Pseudo code

Here are a few general guidelines for writing your pseudo code:
• Mimic good code and good English. Using aspects of both systems means adhering
to the style rules of both to some degree. It is still important that variable names be
mnemonic, comments be included where useful, and English phrases be
comprehensible (full sentences are usually not necessary).
• Ignore unnecessary details. If you are worrying about the placement of commas,
you are using too much detail. It is a good idea to use some convention to group
statements (begin/end, brackets, or whatever else is clear), but you shouldn't obsess
about syntax.
• Don't belabor the obvious. In many cases, the type of a variable is clear from
context; unless it is critical that it is specified to be an integer or real, it is often
unnecessary to make it explicit.
• Take advantage of programming shorthands. Using if-then-else or looping
structures is more concise than writing out the equivalent in English; general
constructs that are not peculiar to a small number of languages are good candidates
for use in pseudocode. Using parameters in specifying procedures is concise, clear,
and accurate, and hence should not be omitted from pseudocode.
• Consider the context. If you are writing an algorithm for quicksort, the statement
"use quicksort to sort the values" is hiding too much detail; if you have already
studied quicksort in a class and later use it as a subroutine in another algorithm,
the statement would be appropriate to use.
• Don't lose sight of the underlying model. It should be possible to "see through"
your pseudocode to the model below; if not (that is, you are not able to analyze
the algorithm easily), it is written at too high a level.
• Check for balance. If the pseudocode is hard for a person to read or difficult to
translate into working code (or worse yet, both!), then something is wrong with
the level of detail you have chosen to use.

Examples of Pseudocode

Example 1 - Computing Value Added Tax (VAT): Pseudo-code the task of computing
the final price of an item after figuring in the tax. Note the three types of instructions:
input (get), process/calculate (=) and output (display).

1. get price of item
2. get VAT rate
3. VAT = price of item times VAT rate
4. final price = price of item plus VAT
5. display final price
6. stop

Variables: price of item, VAT rate, VAT, final price

Note that the operations are numbered and each operation is unambiguous and effectively
computable. We also extract and list all variables used in our pseudo-code. This will be
useful when translating pseudo-code into a programming language
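
To illustrate how directly such pseudo-code translates into real code, here is a minimal
Java sketch of Example 1 (the class and variable names are assumptions chosen for
illustration):

import java.util.Scanner;

// Java translation of the VAT pseudo-code: one statement per pseudo-code step.
public class VatExample {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);

        System.out.print("Price of item: ");
        double priceOfItem = in.nextDouble();        // 1. get price of item

        System.out.print("VAT rate (e.g. 0.075): ");
        double vatRate = in.nextDouble();            // 2. get VAT rate

        double vat = priceOfItem * vatRate;          // 3. VAT = price of item times VAT rate
        double finalPrice = priceOfItem + vat;       // 4. final price = price of item plus VAT

        System.out.println("Final price: " + finalPrice);  // 5. display final price
    }                                                // 6. stop
}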

Example 2 - Computing Weekly Wages: Gross pay depends on the pay rate and the
number of hours worked per week. However, if you work more than 50 hours, you get
paid time-and-a-half for all hours worked over 50. Pseudo-code the task of computing
gross pay given pay rate and hours worked.

1. get hours worked
2. get pay rate
3. if hours worked ≤ 50 then
3.1 gross pay = pay rate times hours worked
4. else
4.1 gross pay = pay rate times 50 plus 1.5 times pay rate times (hours
worked minus 50)
5. display gross pay
6. halt

variables: hours worked, pay rate, gross pay

This example presents the conditional control structure. On the basis of the true/false
question asked in line 3, line 3.1 is executed if the answer is True; otherwise, if the answer
is False, the line subordinate to line 4 (i.e. line 4.1) is executed. In both cases the pseudo-
code resumes at line 5.
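
Again as a hedged illustration (names assumed), the selection structure of Example 2 maps
directly onto an if/else statement in Java:

import java.util.Scanner;

// Java rendering of Example 2: the if/else mirrors pseudo-code lines 3 to 4.1.
public class GrossPayExample {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);

        System.out.print("Hours worked: ");
        double hoursWorked = in.nextDouble();   // 1. get hours worked
        System.out.print("Pay rate: ");
        double payRate = in.nextDouble();       // 2. get pay rate

        double grossPay;
        if (hoursWorked <= 50) {                                          // 3.
            grossPay = payRate * hoursWorked;                             // 3.1
        } else {                                                          // 4.
            grossPay = payRate * 50 + 1.5 * payRate * (hoursWorked - 50); // 4.1
        }
        System.out.println("Gross pay: " + grossPay);   // 5. display gross pay
    }                                                   // 6. halt
}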

Example 3 - Computing a Question Average: Pseudo-code a routine to calculate your
question average.

1. get number of questions
2. sum = 0
3. count = 0
4. while count < number of questions
4.1 get question grade
4.2 sum = sum + question grade
4.3 count = count + 1
5. average = sum / number of questions
6. display average
7. stop

variables: number of questions, sum, count, question grade, average

This example presents an iterative control statement. As long as the condition in line 4 is
True, we execute the subordinate operations 4.1 - 4.3. When the condition is False, we return
to the pseudo-code at line 5.

This is an example of a top-test or while do iterative control structure. There is also a bottom-
test or repeat until iterative control structure which executes a block of statements until the
condition tested at the end of the block is False.
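
As a hedged sketch in Java (the sample data is assumed), the top-test loop of Example 3
corresponds to a while loop, and the bottom-test form corresponds to a do-while loop:

// Both iterative control structures applied to the question-average task,
// assuming the grades are already held in an array for simplicity.
public class QuestionAverageExample {
    public static void main(String[] args) {
        double[] grades = {70, 85, 90};          // assumed sample data

        // Top-test (while-do): the condition is checked before every pass.
        double sum = 0;
        int count = 0;
        while (count < grades.length) {
            sum = sum + grades[count];           // 4.2
            count = count + 1;                   // 4.3
        }
        System.out.println("Average: " + sum / grades.length);   // 5. and 6.

        // Bottom-test (repeat-until / do-while): the body runs at least once and
        // the condition is checked after every pass.
        double total = 0;
        int i = 0;
        do {
            total = total + grades[i];
            i = i + 1;
        } while (i < grades.length);
        System.out.println("Average: " + total / grades.length);
    }
}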

Some Keywords That Should be Used

For looping and selection, the keywords to be used include Do While...EndDo; Do
Until...EndDo; Case...EndCase; If...EndIf; Call ... with (parameters); Call; Return ..... ;
Return; When. Always use scope terminators for loops and iteration.

As verbs, use words such as Generate, Compute, Process, etc. Words such as set, reset,
increment, compute, calculate, add, sum, multiply, ..... print, display, input, output, edit,
test, etc., combined with careful indentation, tend to foster desirable pseudocode.
Do not include data declarations in your pseudo code.

Programming Environment, CASE Tools & HIPO Diagrams

Definition of Programming Environment

A programming environment gives the basic tools and Application Programming Interfaces,
or APIs, necessary to construct programs. Programming environments support the creation,
modification, execution and debugging of programs. The goal of integrating a programming
environment is more than simply building tools that share a common database and provide
a consistent user interface: ideally, the programming environment appears to the
programmer as a single tool, with no firewalls separating the various functions provided
by the environment.

History of Programming Environment

The history of software tools began with the first computers in the early 1950s that used
linkers, loaders, and control programs. In the early 1970s tools became prominent with Unix,
with utilities like grep, awk and make that were meant to be combined flexibly with pipes. The
term "software tools" came from the book of the same name by Brian Kernighan and P. J.
Plauger. Originally, tools were simple and lightweight. As some tools have been
maintained, they have been integrated into more powerful integrated development
environments (IDEs). These environments combine functionality into one place, sometimes
increasing simplicity and productivity, other times trading away flexibility and extensibility. The
workflow of IDEs is routinely contrasted with alternative approaches, such as the use of
Unix shell tools with text editors like Vim and Emacs.

The difference between tools and applications is unclear. For example, developers use simple
databases (such as a file containing a list of important values) all the time as tools. However a
full-blown database is usually thought of as an application in its own right.

For many years, computer-assisted software engineering (CASE) tools were preferred. CASE
tools emphasized design and architecture support, such as for UML. But the most successful
of these tools are IDEs.

The ability to use a variety of tools productively is one quality of a skilled software engineer.

Types of Programming Environment

Software development tools can be roughly divided into the following categories:

• performance analysis tools


• debugging tools
• static analysis and formal verification tools
• correctness checking tools
• memory usage tools
• application build tools
• integrated development environment

Forms of Software tools

Software tools come in many forms, namely:

• Bug Databases: Bugzilla, Trac, Atlassian Jira, LibreSource, SharpForge


• Build Tools: Make, automake, Apache Ant, SCons, Rake, Flowtracer, cmake,
qmake
• Code coverage: C++test,GCT, Insure++, Jtest, CCover
• Code Sharing Sites: Freshmeat, Krugle, Sourceforge. See also Code search engines.
• Compilation and linking tools: GNU toolchain, gcc, Microsoft Visual Studio,
CodeWarrior, Xcode, ICC

• Debuggers: gdb, GNU Binutils, valgrind. Debugging tools also are used in the
process of debugging code, and can also be used to create code that is more compliant
to standards and portable than if they were not used.

• Disassemblers: Generally reverse-engineering tools.


• Documentation generators: Doxygen, help2man, POD, Javadoc, Pydoc/Epydoc,
asciidoc
• Formal methods: Mathematically-based techniques for specification, development
and verification
• GUI interface generators
• Library interface generators: Swig Integration Tools

• Memory Use/Leaks/Corruptions Detection: dmalloc, Electric Fence, duma,
Insure++. Memory leak detection: in the C programming language, for instance, memory
leaks are not as easily detected - software tools called memory debuggers are often
used to find memory leaks, enabling the programmer to find these problems much
more efficiently than by inspection alone.
• Parser generators: Lex, Yacc
• Performance analysis or profiling
• Refactoring Browser
• Revision control: Bazaar, Bitkeeper, Bonsai, ClearCase, CVS, Git, GNU arch,
Mercurial, Monotone, Perforce, PVCS, RCS, SCM, SCCS, SourceSafe, SVN,
LibreSource Synchronizer
• Scripting languages: Awk, Perl, Python, REXX, Ruby, Shell, Tcl
• Search: grep, find
• Source-Code Clones/Duplications Finding
• Source code formatting
• Source code generation tools
• Static code analysis: C++test, Jtest, lint, Splint, PMD, Findbugs, .TEST
• Text editors: emacs, vi, vim

Integrated development environments

Integrated development environments (IDEs) merge the features of many tools into one
complete package. They are usually simpler and make it easier to do simple tasks, such as
searching for content only in files in a particular project. IDEs are often used for
development of enterprise-level applications. Some examples of IDEs are:

• Delphi
• C++ Builder (CodeGear)
• Microsoft Visual Studio
• EiffelStudio
• GNAT Programming Studio
• Xcode
• IBM Rational Application Developer
• Eclipse
• NetBeans
• IntelliJ IDEA
• WinDev
• Code::Blocks
• Lazarus

What are CASE Tools?

CASE tools are a class of software that automates many of the activities involved in
various life cycle phases. For example, when establishing the functional requirements of
a proposed application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will look after
development. Subsequently, system designers can use automated design tools to
transform the prototyped functional requirements into detailed design documents.
Programmers can then use automated code generators to convert the design documents
into code. Automated tools can be used collectively, as mentioned, or individually. For
example, prototyping tools could be used to define application requirements that get
passed to design technicians who convert the requirements into detailed designs in a
traditional manner using flowcharts and narrative documents, without the assistance of
automated design software.

CASE is the scientific application of a set of tools and methods to a software system, which is
meant to result in high-quality, defect-free, and maintainable software products. It also
refers to methods for the development of information systems together with automated
tools that can be used in the software development process.

Types of CASE Tools

Some typical CASE tools are:

• Configuration management tools


• Data modeling tools
• Model transformation tools
• Program transformation tools
• Refactoring tools
• Source code generation tools, and
• Unified Modeling Language

Many CASE tools not only yield code but also generate other output typical of various
systems analysis and design methodologies such as:

• data flow diagram


• entity relationship diagram
• logical schema
• Program specification
• SSADM.
• User documentation

History of CASE

The term CASE was originally coined by the software company Nastec Corporation of
Southfield, Michigan in 1982 with their original integrated graphics and text editor
GraphiText, which was also the first microcomputer-based system to use hyperlinks to
cross-reference text strings in documents. Under the direction of Albert F. Case, Jr., vice
president for product management and consulting, and Vaughn Frick, director of product
management, the DesignAid product suite was expanded to support analysis of a wide range
of structured analysis and design methodologies, notably Ed Yourdon and Tom DeMarco,
Chris Gane & Trish Sarson, Ward-Mellor (real-time) SA/SD and Warnier-Orr (data driven).

The next competitor into the market was Excelerator from Index Technology in
Cambridge, Mass. While DesignAid ran on Convergent Technologies and later
Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/
AT platform. While, at the time of launch, and for several years, the IBM platform did not
support networking or a centralized database as did the Convergent Technologies or
Burroughs machines, the allure of IBM was strong, and Excelerator came to prominence.
Hot on the heels of Excelerator were a rash of offerings from companies such as
Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instrument's
IEF and Accenture's FOUNDATION toolset (METHOD/1, DESIGN/1, INSTALL/1, FCP).
CASE tools were at their peak in the early 1990s. At the time IBM had proposed AD/Cycle
which was an alliance of software vendors centered around IBM's Software repository using
IBM DB2 in mainframe and OS/2:

The application development tools can be from several sources: from IBM, from vendors,
and from the customers themselves. IBM has entered into relationships with Bachman
Information Systems, Index Technology Corporation, and Knowledgeware, Inc. wherein
selected products from these vendors will be marketed through an IBM complementary
marketing program to provide offerings that will help to achieve complete life-cycle
coverage.

With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening
the market for the mainstream CASE tools of today. Interestingly, nearly all of the leaders
of the CASE market of the early 1990s ended up being purchased by Computer
Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett
Management Systems (LBMS).

Categories of Case Tools

CASE Tools can be classified into 3 categories:

• Tools support only specific tasks in the software process.


• Workbenches support only one or a few activities.
• Environments support (a large part of) the software process.

Workbenches and environments are generally built as collections of tools. Tools can
therefore be either stand alone products or components of workbenches and
environments.

CASE Environment

An environment is a collection of CASE tools and workbenches that supports the software
process. CASE environments are classified based on the focus/basis of integration

• Toolkits
• Language-centered
• Integrated
• Fourth generation
• Process-centered

Toolkits
Toolkits are loosely integrated collections of products easily extended by aggregating
different tools and workbenches. Typically, the support provided by a toolkit is limited to
programming, configuration management and project management. The toolkit itself is
typically an environment extended from a basic set of operating system tools, for example the Unix
Programmer's Work Bench and the VMS VAX Set. In addition, toolkits' loose integration
requires the user to activate tools by explicit invocation or simple control mechanisms. The
resulting files are unstructured and could be in different formats, therefore accessing a file
from different tools may require explicit file format conversion. However, since the only
constraint for adding a new component is the format of the files, toolkits can be easily and
incrementally extended.

Language-centered
The environment itself is written in the programming language for which it was developed,
thus enabling users to reuse, customize and extend the environment. Integration of code in
different languages is a major issue for language-centered environments. Lack of process
and data integration is also a problem. The strengths of these environments include good
level of presentation and control integration. Interlisp, Smalltalk, Rational, and KEE are
examples of language-centered environments.

Integrated
These environments achieve presentation integration by providing uniform, consistent, and
coherent tool and workbench interfaces. Data integration is achieved through the
repository concept: they have a specialized database managing all information produced
and accessed in the environment. Examples of integrated environment are IBM AD/Cycle
and DEC Cohesion.

Fourth generation
Fourth generation environments were the first integrated environments. They are sets of
tools and workbenches supporting the development of a specific class of program:
electronic data processing and business-oriented applications. In general, they include
programming tools, simple configuration management tools, document handling facilities
and, sometimes, a code generator to produce code in lower level languages. Informix
4GL, and Focus fall into this category.

Process-centered
Environments in this category focus on process integration with other integration dimensions
as starting points. A process-centered environment operates by interpreting a process model
created by specialized tools. They usually consist of tools handling two functions:

• Process-model execution, and


• Process-model production
Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.[6]

Application areas of CASE Tools

All aspects of the software development life cycle can be supported by software tools,
and so the use of tools from across the spectrum can, arguably, be described as CASE;
from project management software through tools for business and functional analysis,
system design, code storage, compilers, translation tools, test software, and so on.

However, it is the tools that are concerned with analysis and design, and with using
design information to create parts (or all) of the software product, that are most
frequently thought of as CASE tools. CASE applied, for instance, to a database software
product, might normally involve:

• Modeling business/real world processes and data flow


• Development of data models in the form of entity-relationship diagrams
• Development of process and function descriptions
• Production of database creation SQL and stored procedures

CASE Risk

Common CASE risks and associated controls include:

• Inadequate Standardization: Linking CASE tools from different vendors (design
tool from Company X, programming tool from Company Y) may be difficult if the
products do not use standardized code structures and data classifications. File
formats can be converted, but usually not economically. Controls include using tools
from the same vendor, or using tools based on standard protocols and insisting on
demonstrated compatibility. Additionally, if organizations obtain tools for only a
portion of the development process, they should consider acquiring them from a
vendor that has a full line of products to ensure future compatibility if they add more
tools.

• Unrealistic Expectations: Organizations often implement CASE technologies to
reduce development costs. Implementing CASE strategies usually involves high
start-up costs. Generally, management must be willing to accept a long-term
payback period. Controls include requiring senior managers to define their purpose
and strategies for implementing CASE technologies.

• Quick Implementation: Implementing CASE technologies can involve a significant
change from traditional development environments. Typically, organizations should
not use CASE tools the first time on critical projects or projects with short deadlines
because of the lengthy training process. Additionally, organizations should consider
using the tools on smaller, less complex projects and gradually implementing the
tools to allow more training time.
• Weak Repository Controls: Failure to adequately control access to CASE
repositories may result in security breaches or damage to the work documents,
system designs, or code modules stored in the repository. Controls include
protecting the repositories with appropriate access, version, and backup controls.

HIPO Diagrams

The HIPO (Hierarchy plus Input-Process-Output) technique is a tool for planning and/or
documenting a computer program. A HIPO model consists of a hierarchy chart that
graphically represents the program‘s control structure and a set of IPO (Input-Process-
Output) charts that describe the inputs to, the outputs from, and the functions (or processes)
performed by each module on the hierarchy chart.

Strengths, weaknesses, and limitations

Using the HIPO technique, designers can evaluate and refine a program‘s design, and
correct flaws prior to implementation. Given the graphic nature of HIPO, users and
managers can easily follow a program‘s structure. The hierarchy chart serves as a useful
planning and visualization document for managing the program development process. The
IPO charts define for the programmer each module‘s inputs, outputs, and algorithms.

In theory, HIPO provides valuable long-term documentation. However, the ―text plus
flowchart‖ nature of the IPO charts makes them difficult to maintain, so the documentation
often does not represent the current state of the program.

By its very nature, the HIPO technique is best used to plan and/or document a hierarchically
structured program.

The HIPO technique is often used to plan or document a structured program. A variety of
tools, including pseudocode and structured English, can be used to describe processes on
an IPO chart. System flowcharting symbols are sometimes used to identify physical
input, output, and storage devices on an IPO chart.

Components of HIPO

A completed HIPO package has two parts. A hierarchy chart is used to represent the top-
down structure of the program. For each module depicted on the hierarchy chart, an IPO
(Input-Process-Output) chart is used to describe the inputs to, the outputs from, and the
process performed by the module.

The hierarchy chart

It summarises the primary tasks to be performed by an interactive inventory program. Figure
7 shows one possible hierarchy chart (or visual table of contents) for that program. Each
box represents one module that can call its subordinates and return control to its higher-
level parent.

A Set of Tasks to Be Performed by an Interactive Inventory Program is:

• Manage inventory
• Update stock
• Process sale
• Process return
• Process shipment
• Generate report
• Respond to query
• Display status report
• Maintain inventory data
• Modify record
• Add record
• Delete record

Figure 7 A hierarchy chart for an interactive inventory control program.

Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html

At the top of Figure 7 is the main control module, Manage inventory (module 1.0). It
accepts a transaction, determines the transaction type, and calls one of its three
subordinates (modules 2.0, 3.0, and 4.0).

Lower-level modules are identified relative to their parent modules; for example, modules
2.1, 2.2, and 2.3 are subordinates of module 2.0, modules 2.1.1, 2.1.2, and 2.1.3 are
subordinates of 2.1, and so on. The module names consist of an active verb followed by a
subject that suggests the module‘s function.

The objective of the module identifiers is to uniquely identify each module and to
indicate its place in the hierarchy. Some designers use Roman numerals (level I, level II)
or letters (level A, level B) to designate levels. Others prefer a hierarchical numbering
scheme; e.g., 1.0 for the first level; 1.1, 1.2, 1.3 for the second level; and so on. The key
is consistency.

The box at the lower-left of Figure 7 is a legend that explains how the arrows on the
hierarchy chart and the IPO charts are to be interpreted. By default, a wide clear arrow
represents a data flow, a wide black arrow represents a control flow, and a narrow arrow
indicates a pointer.

The IPO charts

An IPO chart is prepared to document each of the modules on the hierarchy chart.

Overview diagrams

An overview diagram is a high-level IPO chart that summarizes the inputs to, processes or
tasks performed by, and outputs from a module. For example, Figure 7.1 shows an overview diagram
for process 2.0, Update stock. Where appropriate, system flowcharting symbols are used to
identify the physical devices that generate the inputs and accept the outputs. The processes
are typically described in brief paragraph or sentence form. Arrows show the primary input
and output data flows.

Figure 7.1 An overview diagram for process 2.0.

Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html

Overview diagrams are primarily planning tools. They often do not appear in the completed
documentation package.

Detail diagrams

A detail diagram is a low-level IPO chart that shows how specific input and output data
elements or data structures are linked to specific processes. In effect, the designer
integrates a system flowchart into the overview diagram to show the flow of data and control
through the module.

Figure 7.2 shows a detail diagram for module 2.0, Update stock. The process steps are
written in pseudocode. Note that the first step writes a menu to the user screen and input
data (the transaction type) flows from that screen to step 2. Step 3 is a case structure. Step
4 writes a transaction complete message to the user screen.

The solid black arrows at the top and bottom of the process box show that control flows from
module 1.0 and, upon completion, returns to module 1.0. Within the case structure (step 3)
are other solid black arrows.

Following case 0 is a return (to module 1.0). The two-headed black arrows following
cases 1, 2, and 3 represent subroutine calls; the off-page connector symbols (the little
home plates) identify each subroutine‘s module number. Note that each subroutine is
documented in a separate IPO chart. Following the default case, the arrow points to an
on-page connector symbol numbered 1. Note the matching on-page connector symbol
pointing to the select structure. On-page connectors are also used to avoid crossing
arrows on data flows.
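
As a hedged illustration of how the process steps described above for module 2.0 might map
onto code (the subroutine names below are assumptions taken from the task list earlier in
this section), the case structure corresponds to a switch statement whose cases call the
subordinate modules:

import java.util.Scanner;

// Sketch of module 2.0, Update stock: menu, transaction type, case structure,
// completion message. Modules 2.1, 2.2 and 2.3 are represented by stub methods.
public class UpdateStock {

    public static void run() {                        // called from module 1.0
        Scanner in = new Scanner(System.in);
        System.out.println("0=Exit  1=Sale  2=Return  3=Shipment");  // step 1: write menu
        int transactionType = in.nextInt();           // step 2: read transaction type

        switch (transactionType) {                    // step 3: case structure
            case 0: return;                           // case 0: return to module 1.0
            case 1: processSale(); break;             // module 2.1
            case 2: processReturn(); break;           // module 2.2
            case 3: processShipment(); break;         // module 2.3
            default: run(); return;                   // invalid entry: repeat the selection
        }
        System.out.println("Transaction complete");   // step 4: completion message
    }

    static void processSale() { }      // stub for module 2.1
    static void processReturn() { }    // stub for module 2.2
    static void processShipment() { }  // stub for module 2.3
}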

Figure 7.2 A detail diagram for process 2.1.


Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html

Often, detailed notes and explanations are written on an extended description that is
attached to each detail diagram. The notes might specify access methods, data types, and
so on.

Figure 64.4 shows a detail diagram for process 2.1. The module writes a template to the
user screen, reads a stock number and a quantity from the screen, uses the stock number
as a key to access an inventory file, and updates the stock on hand. Note that the logic
repeats the data entry process if the stock number does not match an inventory record. A
real IPO chart is likely to show the error response process in greater detail.

Simplified IPO charts

Some designers simplify the IPO charts by eliminating the arrows and system flowchart
symbols and showing only the text. Often, the input and output blocks are moved above
the process block (Fig. 7.3), yielding a form that fits better on a standard 8.5 × 11
(portrait orientation) sheet of paper. Some programmers insert modified IPO charts
similar to Fig. 7.3 directly into their source code as comments. Because the
documentation is closely linked to the code, it is often more reliable than stand-alone
HIPO documentation, and more likely to be maintained.

Fig 7.3 Simplified HIPO diagram

Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html

Detail diagram —
A low-level IPO chart that shows how specific input and output data elements or data
structures are linked to specific processes.
Hierarchy chart —
A diagram that graphically represents a program‘s control structure.
HIPO (Hierarchy plus Input-Process-Output) —
A tool for planning and/or documenting a computer program that utilizes a hierarchy
chart to graphically represent the program‘s control structure and a set of IPO
(Input-Process-Output) charts to describe the inputs to, the outputs from, and the
functions performed by each module on the hierarchy chart.
IPO (Input-Process-Output) chart —
A chart that describes or documents the inputs to, the outputs from, and the
functions (or processes) performed by a program module.
Overview diagram —
A high-level IPO chart that summarizes the inputs to, processes or tasks
performed by, and outputs from a module.
Visual Table of Contents (VTOC) —
A more formal name for a hierarchy chart.

Software

In the 1970s and early 1980s, HIPO documentation was typically prepared by hand using a
template. Some CASE products and charting programs include HIPO support. Some forms
generation programs can be used to generate HIPO forms. The examples in this section were
prepared using Visio.

Implementation and Testing

What is Implementation?

Code is produced from the deliverables of the design phase during implementation. It is the
longest phase of the software development life cycle. Since code is produced here, the
developer regards this phase as the main focus of the life cycle. Implementation may
overlap with both the design and testing phases. As we learnt in the previous unit, many tools
exist (CASE tools) to automate the production of code using information gathered
and produced during the design phase. The implementation phase deals with issues of
quality, performance, baselines, libraries, and debugging. The end deliverable is the
product itself.

The Implementation Phase

Phase            Deliverable
Implementation   Code
                 Critical Error Removal

Table 1  The Implementation Phase

Source: Ronald LeRoi Burback, 1998-12-14

Critical Error Removal

There are three kinds of errors in a system, namely critical errors, non-critical errors, and
unknown errors.

A critical error prevents the system from fully satisfying its intended use. Such errors have to
be corrected before the system can be given to a customer, or even before further
development can progress.

A non-critical error is known, but its occurrence does not notably affect the
system's expected quality. There may indeed be many known errors in the system. They are
usually listed in the release notes and have well-established workarounds.

In addition, the system is likely to contain many yet-to-be-discovered errors. The outcome of
these errors is unknown. Some may turn out to be critical, while others may simply be fixed by
patches or in the next release of the system.

Application of Six Sigma to Software Implementation Projects


Software implementation can be a demanding project, and when a company is attempting to
integrate new software the process can be hectic. Six Sigma is a management approach meant to
discover and control defects. A summary of Six Sigma can be found in Natasha Baker's
"Key Concepts of Six Sigma." The technique consists of five steps:
• Define
• Measure
• Analyze
• Improve
• Control

1. Defining the Implementation

By defining the goals, projects, and deliverables your company will have greater direction
during the changeover. The goals and projects must be measurable. The following
questions, for example, may be necessary: Is it your goal to have 25% of your staff
comfortable enough to train the remaining staff? Do you want full implementation of the
software by March? By utilizing Six Sigma metrics, careful monitoring of team
productivity and implementation success is possible.

2. Measurement of the Implementation

Goals and projects must be expressible with metrics. By using Six Sigma measurement
methods, it is possible to follow user understanding, familiarity, and progress accurately.
It should be noted that continuous data is more useful than discrete data, because it gives
a better overview of the implementation success rate.
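
As a rough sketch of how such a metric might be computed in practice (the figures, the notion of an "opportunity", and the conventional 1.5-sigma shift are illustrative assumptions, not project data), the following Python fragment turns a defect count into Defects Per Million Opportunities (DPMO) and an approximate sigma level:

# Sketch: computing DPMO and an approximate sigma level for a roll-out.
from statistics import NormalDist

defects = 12                 # defects observed during the roll-out (assumed figure)
units = 400                  # e.g. workstations migrated (assumed figure)
opportunities_per_unit = 5   # distinct ways each migration could fail (assumed)

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
# Conventional long-term sigma level, using the customary 1.5-sigma shift.
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(f"DPMO: {dpmo:.0f}, approximate sigma level: {sigma_level:.2f}")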

3. Implementation Analysis

Analysis is important to tackle the occurrence of defects. The Six Sigma method examines essential
relationships and ensures all factors are considered. For example, suppose that in a software
implementation trial, employees are frustrated and confused by new processes; careful
analysis will look at the reasons behind the confusion.

4. Implementation Improvement
After analysis, it is important to look at how the implementation could be improved. In the
example above, perhaps using proficient team members to mentor struggling ones will
help. Six Sigma improvements depend upon experimental design and carefully
constructed analysis of data in order to keep further defects in the implementation
process at bay.

5. Control of the Implementation

If implementation is going to be successful, control is important. It involves consistent
monitoring for proficiency, which ensures that the implementation does not fail. Any
deviations from the goals set demand correction before they become defects. For example,
if you notice your team is not adapting quickly enough, you need to identify the causes
before the deadline. Carefully monitoring the implementation process in this way
minimises defects. The two most important features of software implementation using Six
Sigma are setting measurable goals and employing metrics in order to maximize
improvement and minimize the chance of defects in the new process.

Major Tasks in Implementation


This part provides a brief description of each major task needed for the
implementation of the system. The tasks described here are not site-specific, but
generic or overall project tasks that are required to install hardware and software,
prepare data, and verify the system. For each major task, include the relevant
information as appropriate, and add as many subsections as necessary to describe
all the major tasks adequately.

Examples of major tasks are the following:


• Providing overall planning and coordination for the implementation
• Providing appropriate training for personnel
• Ensuring that all manuals applicable to the implementation effort are available
when needed
• Providing all needed technical assistance
• Scheduling any special computer processing required for the implementation
• Performing site surveys before implementation
• Ensuring that all prerequisites have been fulfilled before the implementation
date
• Providing personnel for the implementation team
• Acquiring special hardware or software
• Performing data conversion before loading data into the system
• Preparing site facilities for implementation

Major Requirement in Implementation

Security
If suitable for the system to be implemented, there is a need to include an overview
of the system security features and requirements during the implementation.

System Security Features
It is pertinent to discuss the security features that will be associated with the system
when it is implemented. It should include the primary security features associated
with the system hardware and software. Security and protection of sensitive bureau
data and information should be discussed.

Security during Implementation


This part addresses security issues particularly related to the implementation effort.
It will be necessary to consider, for example, whether LAN servers or workstations will be
installed at a site with sensitive data preloaded on non-removable hard disk drives.
It will also be important to consider how security would be provided for the data on
these devices during shipping, transport, and installation, so that theft of the
devices cannot compromise the sensitive data.

Implementation Support
This part describes the support (software, materials, equipment, and facilities)
necessary for the implementation, as well as the personnel requirements and training
essential for it.

Hardware, Software, Facilities, and Materials


This section provides a list of support software, materials, equipment, and facilities
required for the implementation.

Hardware
This section offers a list of support equipment and includes all hardware used for
testing the implementation. For example, if a client/server database is
implemented on a LAN, a network monitor or "sniffer" might be used, along with
test programs, to determine the performance of the database and LAN at high-
utilization rates.

Software
This section provides a list of software and databases required to support the
implementation. Identify the software by name, code, or acronym. Identify which
software is commercial off-the-shelf and which is State-specific. Identify any
software used to facilitate the implementation process.

Facilities
This section identifies the physical facilities and accommodations required during
implementation. Examples include physical workspace for assembling and testing
hardware components, desk space for software installers, and classroom space for
training the implementation staff. Specify the hours per day needed, number of
days, and anticipated dates.

Material
This section provides a list of required support materials, such as magnetic tapes
and disk packs.

Personnel
This section describes personnel requirements and any known or proposed staffing
requirements. It also describes the training to be provided for the implementation
staff.
Personnel Requirements and Staffing
This section describes the number of personnel, length of time needed, types of
skills, and skill levels for the staff required during the implementation period. If
particular staff members have been selected or proposed for the implementation,
identify them and their roles in the implementation.

Training of Implementation Staff


This section addresses the training necessary to prepare staff for implementing
and maintaining the system; it does not address user training, which is the subject
of the Training Plan. It describes the type and amount of training required for
each of the following areas, if appropriate, for the system:

• System hardware/software installation
• System support
• System maintenance and modification

Present a training curriculum listing the courses that will be provided, a course
sequence and a proposed schedule. If appropriate, identify which courses
particular types of staff should attend by job position description.

If training will be provided by one or more commercial vendors, identify them, the
course name(s), and a brief description of the course content.

If the training will be provided by State staff, provide the course name(s) and an
outline of the content of each course. Identify the resources, support materials,
and proposed instructors required to teach the course(s).
Performance Monitoring
This section describes the performance monitoring tools and techniques and how they will
be used to help decide if the implementation is successful.

Configuration Management Interface


This section describes the interactions required with the Configuration Management
(CM) representative on CM-related issues, such as when software listings will be
distributed, and how to confirm that libraries have been moved from the development
to the production environment.

Implementation Requirements by Site


This section describes specific implementation requirements and procedures. If these
requirements and procedures differ by site, repeat these subsections for each site; if
they are the same for each site, or if there is only one implementation site, use these
subsections only once. The "X" in the subsection number should be replaced with a
sequence number beginning with 1. Each subsection with the same value of "X" is
associated with the same implementation site. If a complete set of subsections will be
associated with each implementation site, then "X" is assigned a new value for each
site.

Site Name or identification for Site X


This section provides the name of the specific site or sites to be discussed in the
subsequent sections.

Site Requirements
This section defines the requirements that must be met for the orderly
implementation of the system and describes the hardware, software, and site-
specific facilities requirements for this area.

Any site requirements that do not fall into the following categories and were
not described in Section 3, Implementation Support, may be described in this
section, or other subsections may be added following Facilities Requirements
below:

• Hardware Requirements - Describe the site-specific hardware requirements
necessary to support the implementation (such as LAN hardware for a
client/server database designed to run on a LAN).

• Software Requirements - Describe any software required to implement the system
(such as software specifically designed for automating the installation process).

• Data Requirements - Describe specific data preparation requirements and data
that must be available for the system implementation. An example would be the
assignment of individual IDs associated with data preparation.

• Facilities Requirements - Describe the site-specific physical facilities and
accommodations required during the system implementation period. Some
examples of this type of information are provided in Section 3.

Site Implementation Details


This section addresses the specifics of the implementation for this site. Include a
description of the implementation team, schedule, procedures, and database and
data updates. This section should also provide information on the following:

• Team--If an implementation team is required, describe its composition and the
tasks to be performed at this site by each team member.

• Schedule--Provide a schedule of activities, including planning and preparation,
to be accomplished during implementation at this site. Describe the required
tasks in chronological order with the beginning and end dates of each task. If
appropriate, charts and graphics may be used to present the schedule.

• Procedures--Provide a sequence of detailed procedures required to accomplish
the specific hardware and software implementation at this site. If necessary,
other documents may be referenced. If appropriate, include a step-by-step
sequence of the detailed procedures. A checklist of the installation events may
be provided to record the results of the process.

If the site operations startup is an important factor in the implementation, then
address startup procedures in some detail. If the system will replace an already
operating system, then address the startup and cutover processes in detail. If there
is a period of parallel operations with an existing system, address the startup
procedures that include technical and operations support during the parallel cycle
and the consistency of data within the databases of the two systems.

• Database--Describe the database environment where the software system and the
database(s), if any, will be installed. Include a description of the different types
of database and library environments (such as production, test, and training
databases).

• Include the host computer database operating procedures, database file and
library naming conventions, database system generation parameters, and any
other information needed to effectively establish the system database
environment.
• Include database administration procedures for testing changes, if any, to the
database management system before the system implementation.
• Data Update--If data update procedures are described in another document, such
as the operations manual or conversion plan, that document may be referenced
here. The following are examples of information to be included:

- Control inputs
- Operating instructions
- Database data sources and inputs
- Output reports
- Restart and recovery procedures

Back-Off Plan
This section specifies when to make the go/no go decision and the factors to be
included in making the decision. The plan then goes on to provide a detailed list
of steps and actions required to restore the site to the original, pre-conversion
condition.

Post-Implementation Verification
This section describes the process for reviewing the implementation and deciding if
it was successful. It describes how an action item list will be created to rectify any
noted discrepancies. It also references the Back-Off Plan for instructions on how to
back-out the installation, if, as a result of the post-implementation verification, a
no-go decision is made.

Testing Phase

Definition of software testing

Software testing is an empirical examination carried out to provide stakeholders with
information about the quality of the product or service under test. Software testing also
provides an objective, independent view of the software, allowing the business to
appreciate and comprehend the risks associated with implementation of the software.
Software Testing can also be viewed as the process of validating and verifying that a
software program/application/product (1) meets the business and technical requirements
that guided its design and development; (2) works as expected; and (3) can be
implemented with the same characteristics. It is important to note that depending on the
testing method used, software testing can be applied at any time in the development
process, though most of the test effort occurs after the requirements have been defined
and the coding process has been completed.

Testing can never totally detect all the defects within software. Instead, it provides a
comparison that puts the state and behaviour of the product side by side with the
instruments someone applies to recognize a problem. These instruments may include
specifications, contracts, comparable products, past versions of the same product,
inferences about intended or expected purpose, user or customer expectations, relevant
standards, applicable laws, or other criteria.

Every software product has a target audience; for instance, the audience for video game
software is completely different from that for banking software. Software testing, therefore, is
the process of assessing whether the software product will be satisfactory to its end users,
its target audience, its purchasers, and other stakeholders.

Brief History of software testing

In 1979, Glenford J. Myers introduced the separation of debugging from testing, which illustrated
the desire of the software engineering community to separate fundamental development
activities, such as debugging, from that of verification. In 1988, Dave Gelperin and William
C. Hetzel classified the phases and goals of software testing into the following stages:

• Until 1956 - Debugging oriented.
• 1957–1978 - Demonstration oriented.
• 1979–1982 - Destruction oriented.
• 1983–1987 - Evaluation oriented.
• 1988–2000 - Prevention oriented.

Testing methods
Traditionally, software testing methods are divided into black box testing, white box
testing and grey box testing. A test engineer uses these approaches to describe his point
of view when designing test cases.

Black box testing

Black box testing considers the software as a "black box" in the sense that there is no
knowledge of internal implementation. Black box testing methods include: equivalence
partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing,
traceability matrix, exploratory testing and specification-based testing.

Specification-based testing: Specification-based testing intends to test the functionality
of software based on the applicable requirements. Consequently, the tester inputs data
into, and only sees the output from, the test object. This level of testing usually requires
thorough test cases to be supplied to the tester, who can then verify that for a given input,
the output value either "is" or "is not" the same as the expected value specified in the test
case.
Specification-based testing, though necessary, is insufficient to guard against certain
risks.
Merits and demerits: Black box testing has the advantage of an unaffiliated opinion, in the
sense that the tester has no "bonds" with the code, and the perception of the tester is very
simple: he believes the code must have bugs and goes looking for them. On the other hand, black
box testing has the disadvantage of blind exploration, because the tester does not know how the
software being tested was actually constructed. As a result, there are situations when (1) a
tester writes many test cases to check something that could have been tested by only one
test case, and/or (2) some parts of the back-end are not tested at all.
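
To make the black box approach concrete, the sketch below (a hypothetical example; the grading module and the classify function are invented, not taken from any real project) derives test cases purely from a stated specification, using equivalence partitioning and boundary value analysis:

# Black-box test design: only the specification is known, not the code.
# Assumed specification: classify(score) returns "invalid" outside 0-100,
# "fail" for 0-39 and "pass" for 40-100.
import unittest
from grading import classify   # hypothetical module under test

class TestClassifyBlackBox(unittest.TestCase):
    def test_equivalence_partitions(self):
        self.assertEqual(classify(-5), "invalid")   # invalid partition
        self.assertEqual(classify(20), "fail")      # failing partition
        self.assertEqual(classify(70), "pass")      # passing partition

    def test_boundary_values(self):
        self.assertEqual(classify(0), "fail")       # lower valid boundary
        self.assertEqual(classify(39), "fail")      # just below pass mark
        self.assertEqual(classify(40), "pass")      # pass mark
        self.assertEqual(classify(100), "pass")     # upper valid boundary
        self.assertEqual(classify(101), "invalid")  # just above valid range

if __name__ == "__main__":
    unittest.main()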

White box testing

In white box testing, the tester has access to the internal data structures and
algorithms, including the code that implements them.

Types of white box testing
White box testing is of different types namely:

• API testing (application programming interface) - testing of the application using
public and private APIs
• Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the
test designer can create tests to cause all statements in the program to be executed
at least once; see the sketch after this list)
• Fault injection methods
• Mutation testing methods
• Static testing - white box testing includes all static testing
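
As a minimal sketch of coverage-driven white box test design (the function is invented for the example), the tests below are chosen by reading the code so that every statement and every branch executes at least once:

# Code under test (visible to the white-box tester).
def apply_discount(price, is_member):
    if price < 0:
        raise ValueError("price must be non-negative")
    if is_member:
        return price * 0.9   # members get 10% off
    return price

import unittest

class TestApplyDiscountWhiteBox(unittest.TestCase):
    def test_negative_price_branch(self):
        with self.assertRaises(ValueError):
            apply_discount(-1, False)          # executes the error branch

    def test_member_branch(self):
        self.assertAlmostEqual(apply_discount(100, True), 90.0)

    def test_non_member_branch(self):
        self.assertEqual(apply_discount(100, False), 100)

if __name__ == "__main__":
    unittest.main()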

Grey Box Testing

Grey box testing requires access to internal data structures and algorithms for the
purpose of designing the test cases, but testing is done at the user, or black-box, level.
Manipulating input data and formatting output cannot be regarded as grey box, because the
input and output are clearly outside of the "black box" that we are calling the system under
test. This distinction is particularly important when conducting integration testing between
two modules of code written by two different developers, where only the interfaces are
exposed for test. However, modifying a data repository can be regarded as grey box, because
the user would not ordinarily be able to change the data outside of the system under test.
Grey box testing may also include reverse engineering to determine boundary values or
error messages.

Integration Testing

Integration testing is any type of software testing that seeks to reveal defects in the
interaction of individual software modules with each other. Such integration flaws can
result when new modules are developed in separate branches and then integrated into the
main project.
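
A minimal sketch of such a test (the tax and invoicing modules and their functions are assumptions for illustration) exercises the interface between two separately developed modules rather than each module in isolation:

# Integration test: checks that the tax module and the invoicing module
# work together correctly through their interface.
import unittest
from tax import vat_for          # hypothetical module from developer A
from invoicing import total_due  # hypothetical module from developer B

class TestInvoiceTaxIntegration(unittest.TestCase):
    def test_total_includes_vat_from_tax_module(self):
        net = 200.00
        # total_due is expected to call vat_for internally; the test verifies
        # that the two modules agree on how VAT is passed across the interface.
        self.assertAlmostEqual(total_due(net), net + vat_for(net))

if __name__ == "__main__":
    unittest.main()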

Regression Testing

Regression testing is any type of software testing that attempts to reveal software
regressions. A regression occurs whenever software functionality that was previously
working correctly stops working as anticipated. Usually, regressions occur as
an unplanned result of program changes, when a newly developed part of the software
collides with previously existing code. Methods of regression testing include re-running
previously run tests and checking whether previously repaired faults have re-appeared.
The extent of testing depends on the phase in the release process and the risk of the added
features.
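
In practice this often amounts to keeping the earlier test suites and re-running them automatically after every change; the sketch below (the test directory name is an assumption) uses Python's unittest discovery for that purpose:

# Regression suite: tests written for earlier releases are kept and re-run
# after each change; any failure indicates a regression.
import unittest

def load_existing_tests():
    # Discover all previously written tests under an assumed "tests" directory.
    return unittest.defaultTestLoader.discover("tests")

if __name__ == "__main__":
    suite = load_existing_tests()
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    if not result.wasSuccessful():
        print("Regression detected: previously passing behaviour has changed.")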

Acceptance testing

Either of the two things below can be regarded as acceptance testing:

1. A smoke test used as an acceptance test prior to introducing a new build
to the main testing process, i.e. before integration or regression testing.
2. Acceptance testing performed by the customer, usually in their own lab environment on
their own hardware, which is known as user acceptance testing (UAT).

Non Functional Software Testing

The following methods are used to test non-functional aspects of software:

• Performance testing checks whether the software can deal with large quantities of
data or users. This is generally referred to as software scalability, and this activity of
non-functional software testing is often referred to as load testing.
• Stability testing checks whether the software can continue to function well over an
acceptable period of time. This activity of non-functional software testing is
often referred to as endurance (or soak) testing.
• Usability testing checks whether the user interface is easy to use and understand.
• Security testing is essential for software that processes confidential data, in order to
prevent system intrusion by hackers.
• Internationalization and localization testing checks these aspects of the software; a
pseudo-localization method can be used for this.

Compared to functional testing, which establishes the correct operation of the software in
that it matches the expected behavior defined in the design requirements, non-functional
testing verifies that the software functions properly even when it receives invalid or
unexpected inputs. Non-functional testing, especially for software, is meant to establish
whether the device under test can tolerate invalid or unexpected inputs, thereby
establishing the robustness of input validation routines as well as error-handling routines.
An example of non-functional testing is software fault injection, in the form of fuzzing.
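
A very small fuzzing sketch is shown below (the parse_record routine is invented for the example); random, malformed strings are fed to an input-validation routine, and any exception other than the expected rejection is counted as a robustness defect:

# Minimal fuzz test: feed random strings to a parser and record any
# unexpected exception as a robustness failure.
import random
import string

def parse_record(text):
    # Hypothetical routine under test: expects "name;age" and must reject
    # anything else with ValueError rather than crashing.
    name, age = text.split(";")
    return name.strip(), int(age)

failures = 0
for _ in range(10_000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 40)))
    try:
        parse_record(junk)
    except ValueError:
        pass                  # expected rejection of invalid input
    except Exception as exc:  # anything else counts as a robustness defect
        failures += 1
        print(f"Unhandled {type(exc).__name__} for input {junk!r}")

print(f"Fuzzing finished with {failures} unexpected failure(s)")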

Destructive testing

Destructive testing attempts to cause the software or a sub-system to fail, in order to test its
robustness.

Testing process

The testing process can take two forms. Usually, testing is performed by an
independent group of testers after the functionality is developed but before the product is
shipped to the customer. Another practice is to start software testing at the same time the
project starts and continue until the project finishes. The first practice often results in the
testing phase being used as a project buffer to compensate for project delays, thereby
compromising the time devoted to testing.

Testing can be done on the following levels:

• Unit testing tests the minimal software component, or module. Each unit (basic
component) of the software is tested to verify that the detailed design for the unit
has been correctly implemented. In an object-oriented environment, this is usually
at the class level, and the minimal unit tests include the constructors and
destructors (a minimal sketch of such a test is given after this list).
• Integration testing exposes defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated and
tested until the software works as a system.
• System testing tests a completely integrated system to verify that it meets its
requirements.
• System integration testing verifies that a system is integrated to any external or
third party systems defined in the system requirements.
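
For instance, a unit test at the class level, as mentioned in the first bullet above, might look like the following sketch (the Account class is invented to stand in for a real unit):

# Unit test at the class level: the constructor and one behaviour of a
# single class are verified in isolation.
import unittest

class Account:                      # invented class standing in for a real unit
    def __init__(self, owner, balance=0):
        if balance < 0:
            raise ValueError("initial balance cannot be negative")
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class TestAccountUnit(unittest.TestCase):
    def test_constructor_sets_fields(self):
        acct = Account("Ada", 50)
        self.assertEqual(acct.owner, "Ada")
        self.assertEqual(acct.balance, 50)

    def test_constructor_rejects_negative_balance(self):
        with self.assertRaises(ValueError):
            Account("Ada", -1)

    def test_deposit_increases_balance(self):
        acct = Account("Ada")
        acct.deposit(25)
        self.assertEqual(acct.balance, 25)

if __name__ == "__main__":
    unittest.main()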

Before shipping the final version of software, alpha and beta testing are often done
additionally:

• Alpha testing is simulated or actual operational testing by potential users/customers
or an independent test team at the developers' site. Alpha testing is often employed
for off-the-shelf software as a form of internal acceptance testing, before the
software goes to beta testing.
• Beta testing comes after alpha testing. Versions of the software, known as beta
versions, are released to a limited audience outside of the programming team. The
software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available to the
open public to increase the feedback field to a maximal number of future users.

Finally, acceptance testing can be conducted by the end-user, customer, or client to
validate whether or not to accept the product. Acceptance testing may be performed as
part of the hand-off process between any two phases of development.

Benchmarks may be employed during regression testing to ensure that the performance of
the newly modified software will be at least as acceptable as the earlier version or, in the
case of code optimization, that some real improvement has been achieved.

Testing Tools

Program testing and fault detection can be aided significantly by testing tools and debuggers.
Testing/debug tools include features such as:

• Program monitors, permitting full or partial monitoring of program code, including:
o Instruction set simulators, permitting complete instruction-level monitoring and
trace facilities
o Program animation, permitting step-by-step execution and conditional
breakpoints at source level or in machine code
o Code coverage reports
• Formatted dump or symbolic debugging tools, allowing inspection of program
variables on error or at chosen points
• Automated functional GUI testing tools are used to repeat system-level tests through
the GUI
• Benchmarks, allowing run-time performance comparisons to be made
• Performance analysis (or profiling tools) that can help to highlight hot spots and
resource usage

Software Quality Assurance (SQA)

Concepts and Definitions


Software Quality Assurance (SQA) is defined as a planned and systematic approach to
the evaluation of the quality of and adherence to software product standards, processes,
and procedures. SQA includes the process of assuring that standards and procedures are
established and are followed throughout the software acquisition life cycle. Compliance
with agreed-upon standards and procedures is evaluated through process monitoring,
product evaluation, and audits. Software development and control processes should
include quality assurance approval points, where an SQA evaluation of the product may
be done in relation to the applicable standards.

Standards and Procedures
Establishing standards and procedures for software development is critical, since these
provide the structure from which the software evolves. Standards are the established
yardsticks to which the software products are compared. Procedures are the established
criteria to which the development and control processes are compared.
Standards and procedures establish the prescribed methods for developing software; the
SQA role is to ensure their existence and adequacy. Proper documentation of standards
and procedures is necessary since the SQA activities of process monitoring, product
evaluation and auditing rely upon clear definitions to measure project compliance.

Types of standards include:


• Documentation Standards specify form and content for planning, control, and
product documentation and provide consistency throughout a project.
• Design Standards specify the form and content of the design product. They
provide rules and methods for translating the software requirements into the
software design and for representing it in the design documentation.
• Code Standards specify the language in which the code is to be written and
define any restrictions on use of language features. They define legal
language structures, style conventions, rules for data structures and
interfaces, and internal code documentation.
Procedures are explicit steps to be followed in carrying out a process. All processes
should have documented procedures. Examples of processes for which procedures are
needed are configuration management, non-conformance reporting and corrective
action, testing, and formal inspections.
If developed according to the NASA DID, the Management Plan describes the software
development control processes, such as configuration management, for which there have
to be procedures, and contains a list of the product standards.
Standards are to be documented according to the Standards and Guidelines DID in the
Product Specification. The planning activities required to assure that both products and
processes comply with designated standards and procedures are described in the QA
portion of the Management Plan.

Software Quality Assurance Activities


Product evaluation and process monitoring are the SQA activities that assure the
software development and control processes described in the project's Management Plan
are correctly carried out and that the project's procedures and standards are followed.
Products are monitored for conformance to standards and processes are monitored for
conformance to procedures. Audits are a key technique used to perform product
evaluation and process monitoring. Review of the Management Plan should ensure that
appropriate SQA approval points are built into these processes.

Product evaluation is an SQA activity that assures standards are being followed.
Ideally, the first products monitored by SQA should be the project's standards and
procedures. SQA assures that clear and achievable standards exist and then evaluates
compliance of the software product to the established standards. Product evaluation
assures that the software product reflects the requirements of the applicable standard(s)
as identified in the Management Plan.

Process monitoring is an SQA activity that ensures that appropriate steps to carry out
the process are being followed. SQA monitors processes by comparing the actual steps
carried out with those in the documented procedures. The Assurance section of the
Management Plan specifies the methods to be used by the SQA process monitoring
activity.
A fundamental SQA technique is the audit, which looks at a process and/or a product in
depth, comparing them to established procedures and standards. Audits are used to
review management, technical, and assurance processes to provide an indication of the
quality and status of the software product.

The purpose of an SQA audit is to assure that proper control procedures are being
followed, that required documentation is maintained, and that the developer's status
reports accurately reflect the status of the activity. The SQA product is an audit report
to management consisting of findings and recommendations to bring the development
into conformance with standards and/or procedures.

SQA Relationships to Other Assurance Activities


Some of the more important relationships of SQA to other management and assurance
activities are described below.

Configuration Management Monitoring


SQA assures that software Configuration Management (CM) activities are performed
in accordance with the CM plans, standards, and procedures. SQA reviews the CM
plans for compliance with software CM policies and requirements and provides
follow-up for nonconformances. SQA audits the CM functions for adherence to
standards and procedures and prepares reports of its findings.
The CM activities monitored and audited by SQA include baseline control,
configuration identification, configuration control, configuration status accounting,
and configuration authentication. SQA also monitors and audits the software library.
SQA assures that:

• Baselines are established and consistently maintained for use in subsequent
baseline development and control.

• Software configuration identification is consistent and accurate with respect to
the numbering or naming of computer programs, software modules, software
units, and associated software documents.
• Configuration control is maintained such that the software configuration used
in critical phases of testing, acceptance, and delivery is compatible with the
associated documentation.
• Configuration status accounting is performed accurately including the
recording and reporting of data reflecting the software's configuration
identification, proposed changes to the configuration identification, and the
implementation status of approved changes.

• Software configuration authentication is established by a series of
configuration reviews and audits that exhibit the performance required by the
software requirements specification and the configuration of the software is
accurately reflected in the software design documents.
• Software development libraries provide for proper handling of software code,
documentation, media, and related data in their various forms and versions
from the time of their initial approval or acceptance until they have been
incorporated into the final media.
• Approved changes to baselined software are made properly and consistently in
all products, and no unauthorized changes are made.

Verification and Validation Monitoring


SQA assures Verification and Validation (V&V) activities by monitoring technical
reviews, inspections, and walkthroughs. The SQA role in formal testing is described in
the next section. The SQA role in reviews, inspections, and walkthroughs is to observe,
participate as needed, and verify that they were properly conducted and documented.
SQA also ensures that any actions required are assigned, documented, scheduled, and
updated.

Formal software reviews should be conducted at the end of each phase of the
life cycle to identify problems and determine whether the interim product meets all
applicable requirements. Examples of formal reviews are the Preliminary Design
Review (PDR), Critical Design Review (CDR), and Test Readiness Review (TRR). A
review looks at the overall picture of the product being developed to see if it satisfies its
requirements. Reviews are part of the development process, designed to provide a
ready/not-ready decision to begin the next phase. In formal reviews, actual work done is
compared with established standards. SQA's main objective in reviews is to assure that
the Management and Development Plans have been followed, and that the product is
ready to proceed with the next phase of development. Although the decision to proceed
is a management decision, SQA is responsible for advising management and
participating in the decision.

An inspection or walkthrough is a detailed examination of
a product on a step-by-step or line-of-code by line-of-code basis to find errors. For
inspections and walkthroughs, SQA assures, at a minimum, that the process is properly
completed and that needed follow-up is done. The inspection process may be used to
measure compliance to standards.

Formal Test Monitoring


SQA assures that formal software testing, such as Acceptance testing, is done in
accordance with plans and procedures. SQA reviews testing documentation for
completeness and adherence to standards. The documentation review includes test
plans, test specifications, test procedures, and test reports. SQA monitors testing and
provides follow-up on nonconformances. By test monitoring, SQA assures software
completeness and readiness for delivery. The objectives of SQA in monitoring formal
software testing are to assure that:

• The test procedures are testing the software requirements in accordance with test
plans.
• The test procedures are verifiable.
• The correct or "advertised" version of the software is being tested (by SQA
monitoring of the CM activity).
• The test procedures are followed.

• Nonconformances occurring during testing (that is, any incident not expected in
the test procedures) are noted and recorded.
• Test reports are accurate and complete.
• Regression testing is conducted to assure nonconformances have been corrected.
• Resolution of all nonconformances takes place prior to delivery.

Software testing verifies that the software meets its requirements. The quality of
testing is assured by verifying that project requirements are satisfied and that the
testing process is in accordance with the test plans and procedures.

Software Quality Assurance during the Software Acquisition Life Cycle


In addition to the general activities described in subsections C and D, there are phase-
specific SQA activities that should be conducted during the Software Acquisition
Life Cycle. At the conclusion of each phase, SQA concurrence is a key element in the
management decision to initiate the following life cycle phase. Suggested activities for
each phase are described below.

Software Concept and Initiation Phase


SQA should be involved in both writing and reviewing the Management Plan in order
to assure that the processes, procedures, and standards identified in the plan are
appropriate, clear, specific, and auditable. During this phase, SQA also provides the
QA section of the Management Plan.

Software Requirements Phase


During the software requirements phase, SQA assures that software requirements are
complete, testable, and properly expressed as functional, performance, and interface
requirements.

Software Architectural (Preliminary) Design Phase

SQA activities during the architectural (preliminary) design phase include:


• Assuring adherence to approved design standards as designated in the
Management Plan.
• Assuring all software requirements are allocated to software components.
• Assuring that a testing verification matrix exists and is kept up to date.
• Assuring the Interface Control Documents are in agreement with the standard in
form and content.

• Reviewing PDR documentation and assuring that all action items are resolved.
• Assuring the approved design is placed under configuration management.

Software Detailed Design Phase
SQA activities during the detailed design phase include:
• Assuring that approved design standards are followed.
• Assuring that allocated modules are included in the detailed design.
• Assuring that results of design inspections are included in the design.
• Reviewing CDR documentation and assuring that all action items are resolved.

Software Implementation Phase


SQA activities during the implementation phase include the audit of:
• Results of coding and design activities, including the schedule contained in
the Software Development Plan.
• Status of all deliverable items.
• Configuration management activities and the software development library.
• Nonconformance reporting and corrective action system.

Software Integration and Test Phase


SQA activities during the integration and test phase include:
• Assuring readiness for testing of all deliverable items.
• Assuring that all tests are run according to test plans and procedures and that
any non-conformances are reported and resolved.
• Assuring that test reports are complete and correct.
• Certifying that testing is complete and software and documentation are ready
for delivery.
• Participating in the Test Readiness Review and assuring all action items are
completed.

Software Acceptance and Delivery Phase


As a minimum, SQA activities during the software acceptance and delivery phase include
assuring the performance of a final configuration audit to demonstrate that all deliverable
items are ready for delivery.

Software Sustaining Engineering and Operations Phase


During this phase, there will be mini-development cycles to enhance or correct the
software. During these development cycles, SQA conducts the appropriate phase-
specific activities described above.
Techniques and Tools
SQA should evaluate its needs for assurance tools, weighing those available off-the-shelf
against the requirements of the specific project, and must develop whatever others it
requires. Useful tools might include audit and inspection checklists and automatic code
standards analyzers.

Compatibility

What is Compatibility Testing?


Software testing comes in different types. Compatibility testing is one of the several types
of software testing; it is carried out on a system that is developed based on certain
yardsticks and that has to perform definite functionality in an already existing
setup/environment. Many factors determine the compatibility of a system/application being
developed with, for example, other systems/applications, operating systems and networks;
they include the use of the system/application in that environment, the demand for the
system/application, and so on. On many occasions, the reason why users prefer not to adopt
an application/system is its non-compatibility with another system/application, network,
hardware platform or OS they are already using, which explains why the efforts of
developers may appear to be in vain. Compatibility testing can also be used to certify the
compatibility of the system/application/website built with various other objects, such as
web browsers, hardware platforms, users, operating systems, etc. It helps to find out how
well a system performs in a particular environment (hardware, network, operating system,
etc.). Compatibility testing can be performed manually or with automation tools.

Compatibility testing computing environment

The computing environment that will require compatibility testing may include some or all of the
elements mentioned below:

• Computing capacity of hardware platform (IBM 360, HP 9000, etc.)
• Bandwidth handling capacity of networking hardware
• Compatibility of peripherals (Printer, DVD drive, etc.)
• Operating systems (MVS, UNIX, Windows, etc.)
• Database (Oracle, Sybase, DB2, etc.)
• Other System Software (Web server, networking/ messaging tool, etc.)
• Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)
Browser compatibility testing, which can also be referred to as user experience testing,
requires that web applications are tested on different web browsers, to ensure the
following:

• Users have the same visual experience irrespective of the browsers through which
they view the web application.
• In terms of functionality, the application must behave and respond the same way
across different browsers.

Compatibility between versions: This has to do with testing the performance of a
system/application in relation to its own predecessor/successor versions. This is
sometimes referred to as backward and forward compatibility. For example, Windows 98
was developed with backward compatibility for Windows 95.
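
In code, backward compatibility often reduces to a newer component still accepting the artefacts of its predecessor; the sketch below (file format and field names invented for the example) shows a version 2 settings reader that still understands version 1 files:

# Backward compatibility sketch: version 2 of a settings reader still
# understands the version 1 file layout.
def read_settings(lines):
    header = lines[0].strip()
    if header == "SETTINGS v1":          # legacy format: "key=value" pairs
        return dict(line.strip().split("=", 1)
                    for line in lines[1:] if line.strip())
    if header == "SETTINGS v2":          # current format: "key = value" with spaces
        return {k.strip(): v.strip()
                for k, v in (line.split("=", 1) for line in lines[1:] if "=" in line)}
    raise ValueError(f"unsupported settings version: {header!r}")

# Both an old and a new file are readable by the same (newer) software.
old_file = ["SETTINGS v1\n", "theme=dark\n", "autosave=yes\n"]
new_file = ["SETTINGS v2\n", "theme = light\n", "autosave = no\n"]
print(read_settings(old_file))   # {'theme': 'dark', 'autosave': 'yes'}
print(read_settings(new_file))   # {'theme': 'light', 'autosave': 'no'}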

Software Compatibility testing: This is the evaluation of the performance of
system/application in connection with other software. For example: Software
compatibility with operating tools for network, web servers, messaging tools etc.

Operating System compatibility testing: This is the evaluation of the performance of the
system/application in connection with the underlying operating system on which it will
be used.

Databases compatibility testing: Many applications/systems operate on databases.
Database compatibility testing is used to evaluate an application/system's performance in
connection with the database it will interact with.

Usefulness of Compatibility Testing

Compatibility testing can help developers understand the yardsticks that their
system/application needs to reach and fulfil in order to be accepted by intended users
who are already using particular OS, network, software and hardware. It also helps
users to find out which system will best fit into the existing setup they are using.

Certification testing falls within the scope of compatibility testing. Product vendors run
the complete suite of tests on the newer computing environment to get their application
certified for a specific operating system or database.

Software verification and validation

What is Verification and Validation (V&V)

Verification and Validation (V&V) is the process of checking that a software system meets
specifications and that it fulfils its expected purpose. It is normally part of the software
testing process of a project.

According to the Capability Maturity Model (CMMI-SW v1.1),

• Verification is the process of evaluating software to determine whether the


products of a given development phase satisfy the conditions imposed at the start of
that phase. [IEEE-STD-610].
• Validation is the process of evaluating software during or at the end of the
development process to determine whether it satisfies specified requirements.
[IEEE-STD-610]

In other words, validation ensures that the product actually meets the user's needs, and
that the specifications were correct in the first place, while verification ensures that the
product has been built according to the requirements and design specifications.
Validation ensures that "you built the right thing"; verification ensures that "you built it
right". Validation confirms that the product, as provided, will fulfil its intended use.

Looking at it from the arena of modeling and simulation, the definitions of validation,
verification and accreditation are similar:

• Validation is the process of determining the degree to which a model, simulation,
or federation of models and simulations, and their associated data, are accurate
representations of the real world from the perspective of the intended use(s).
• Accreditation is the formal certification that a model or simulation is acceptable to be
used for a specific purpose.
• Verification is the process of determining that a computer model, simulation, or
federation of models and simulations implementations and their associated data
accurately represent the developer's conceptual description and specifications.

Classification of methods

In mission-critical systems where flawless performance is absolutely necessary, formal
methods can be used to ensure the correct operation of a system. However, for non-
mission-critical systems, formal methods often prove to be very costly and an alternative
method of V&V must be sought. In such cases, syntactic methods are often used.

Test cases

A test case is a tool used in the Verification and Validation process.

The Quality Assurance (QA) team prepares test cases for verification; these help to
determine whether the process that was followed to develop the final product is right.

The Quality Control (QC) team uses test cases for validation; these ascertain whether the
product is built according to the requirements of the user. Other methods, such as reviews,
also provide for validation when they are used early in the Software Development Life
Cycle.
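
For illustration, a single documented test case (all contents invented for the example) might be laid out along these lines:

Test case ID     : TC-017
Objective        : Verify that a registered user can log in with valid credentials
Precondition     : User account "jdoe" exists and is active
Steps            : 1. Open the login page
                   2. Enter username "jdoe" and the valid password
                   3. Click "Log in"
Expected result  : The user is taken to the dashboard and a login entry is recorded
Actual result    : (recorded during execution)
Status           : Pass / Fail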

Independent Verification and Validation

Verification and validation is often carried out by a group separate from the development
team; in this case, the process is called "Independent Verification and Validation", or
IV&V.

Regulatory environment

V&V is mandatory in industries regulated by law, where compliance requirements are
often set by government agencies or industry regulatory authorities. The FDA, for
instance, even demands that software versions and patches be validated.

Software Verification & Validation Model

The Verification & Validation Model is used to improve the software project
development life cycle.

Fig 8 Verification and Validation Model

Source: http://www.buzzle.com/editorials/4-5-2005-68117.asp

A perfect software product is developed when every step is taken in the right direction;
that is to say, "a right product is developed in a right manner". The Software Verification
& Validation Model helps to achieve this and also helps to improve the quality of the
software product.

The model not only makes sure that certain rules are followed at the time of development
of the software, but also ensures that the product that is developed fulfils the required
specifications. The result is that the risk associated with any software project is reduced,
up to a certain level, by helping in the detection and correction of errors and mistakes
which are unknowingly introduced during the development process.

Few terms involved in Verification:

Inspection:
Inspection involves a small team, usually of about 3-6 people, led by a team leader, which
formally reviews the documents and work products produced during the various phases of
the product development life cycle. The product, as well as the related documents, is
presented to the team, whose members bring different interpretations to the
presentation. The bugs that are discovered during the inspection are conveyed to the next
level in order to take care of them.

Walkthroughs:
In a walkthrough, the review is carried out without formal preparation (of any presentation or
documentation). During the walkthrough, the presenter/author introduces the material to
all the participants in order to make them familiar with it. Though walkthroughs can help
in finding bugs, they are mainly used for knowledge sharing or communication purposes.

Buddy Checks:
A buddy check does not involve a team; rather, one person goes through the documents
prepared by another person in order to find bugs which the author could not find
previously.

The activities involved in the Verification process are: requirement specification
verification, functional design verification, internal/system design verification, and code
verification. Each activity ascertains that the product is developed correctly and that every
requirement, specification, design and code element is verified.

Terms used in Validation process:

Code Validation/Testing:
Unit Code Validation or Unit Testing is a type of testing, which the developers conduct in
order to find out any bug in the code unit/module developed by them. Code testing other than
Unit Testing can be done by testers or developers.

Integration Validation/Testing:
Integration testing is conducted in order to find out if different (two or more) units/modules
match properly. This test helps in finding out if there is any defect in the interface between
different modules.

Functional Validation/Testing:
This type of testing is meant to find out whether the system meets the functional requirements.
In this type of testing, the system is validated for its functional behavior. Functional testing
does not deal with the internal coding of the project; instead, it checks whether the system
behaves as per the expectations.

User Acceptance Testing or System Validation:


In this type of testing, the developed product is handed over to the user/paid testers in
order to test it in a real-world setting. The product is validated to find out whether it works
according to the system specifications and satisfies all the user requirements. As the
users/paid testers use the software, bugs that are yet undiscovered may come up; these are
communicated to the developers to be fixed. This helps in the improvement of the
final product.

