Software Engineering
Computer software
Definition of software
Computer software is a general name for all forms of programs. A program itself is
a sequence of instructions which the computer follows to perform a given task.
Types of software
Software can be categorised into three major types, namely system software,
programming software and application software.
System software
System software helps to run the computer hardware and the entire computer system.
It includes the following:
• device drivers
• operating systems
• servers
• utilities
• windowing systems
The function of systems software is to insulate the applications programmer from the details
of the particular computer complex being used, including such peripheral devices as
communications, printers, readers, displays and keyboards, and also to partition the
computer's resources such as memory and processor time in a safe and stable manner.
Programming software
Programming software offers tools that assist a programmer in writing programs and other
software in different programming languages in a more convenient way. The tools
include:
• compilers
• debuggers
• interpreters
• linkers
• text editors
Application software
Application software is a class of software which the user of a computer needs to accomplish
one or more specific tasks. Common applications include the following:
• industrial automation
• business software
• computer games
• quantum chemistry and solid state physics software
• telecommunications (i.e., the internet and everything that flows on it)
• databases
• educational software
• medical software
• military software
• molecular modeling software
• photo-editing
• spreadsheet
• word processing
• decision-making software
Software engineering can be divided into ten sub-disciplines. They are as follows:
Software Engineering Goals and Principles
Goals
Stated requirements, as they are initially specified for systems, are usually incomplete.
Apart from accomplishing these stated requirements, a good software system must be
able to easily support changes to these requirements over the system's life. Therefore, a
major goal of software engineering is to be able to deal with the effects of these changes.
The software engineering goals include:
• Efficiency: The software system should use the resources that are available in an
optimal manner.
• Understandability: The software should accurately model the view the reader has
of the real world. Since code in a large, long-lived software system is usually read
more times than it is written, it should be easy to read even at the expense of being easy to
write, and not the other way around.
Principles
Sound engineering principles must be applied throughout development, from the design
phase to the final fielding of the system, in order to attain a software system that satisfies the
above goals. These include:
• Information Hiding: The code should include no needless detail. Elements that do
not affect other segments of the system are inaccessible to the user, so that only the
intended operations can be performed. There are no "undocumented features" (see the sketch after this list).
• Uniformity: The notation and the use of comments, specific keywords and formatting
are consistent and free from unnecessary differences from other parts of the code.
• Completeness: Nothing is deliberately missing from any module. All important or
relevant components are present both in the modules and in the overall system as
appropriate.
• Confirmability: The modules of the program can be tested individually with
adequate rigor. This gives rise to a more readily alterable system, and enables the
reusability of tested components.
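As an illustration of information hiding, the short Python sketch below (a hypothetical example, not part of the original notes) exposes only the intended stack operations while keeping the underlying storage an internal detail:

class Stack:
    # Exposes only push, pop and size; the backing list is an internal detail.
    def __init__(self):
        self._items = []          # leading underscore: not part of the public interface

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("pop from an empty stack")
        return self._items.pop()

    def size(self):
        return len(self._items)

# Client code uses only the documented operations and never touches _items directly.
s = Stack()
s.push(10)
s.push(20)
print(s.pop())    # 20
print(s.size())   # 1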
Software engineering originated from the NATO Software Engineering Conference of 1968,
which came at the time of the software crisis. The field of software engineering has
since been growing gradually as a study dedicated to creating quality software. In
spite of having been around for a long time, it is a relatively young field compared to other fields
of engineering, and some people still question whether software engineering is
really engineering, because software is largely invisible. Although it is disputed
what impact it has had on actual software development over the last more than 40 years, the
field's future looks bright according to Money Magazine and Salary.com, which rated
"software engineering" as the best job in America in 2006.
The early computers had their software wired into the hardware, which made them
inflexible because the software could not easily be transferred from one machine to another.
This problem necessitated the development of software as a separate entity from the hardware.
Programming languages started to appear in the 1950s, and this was another major step in
abstraction. Major languages such as FORTRAN, ALGOL, and COBOL were released in the
late 1950s to deal with scientific, algorithmic, and business problems respectively. E.W.
Dijkstra wrote his seminal paper, "Go To Statement Considered Harmful", in 1968, and David
Parnas introduced the key concepts of modularity and information hiding in 1972 to help
programmers deal with the ever-increasing complexity of software systems. A software
system for managing the hardware, called an operating system, was also introduced, most
notably Unix in 1969. In 1967, the Simula language introduced the object-oriented
programming paradigm.
The technological advancement of software has always been driven by the ever-changing
manufacture of various types of computer hardware. As new technologies emerged, from the
vacuum tube to the transistor and then the microprocessor, there was an increasing need to
upgrade and even write new software. In the mid 1980s, software experts reached a consensus
on the centralised construction of software using a software development life cycle that starts
from system analysis. This period gave birth to object-oriented programming languages.
Open-source software started to appear in the early 90s in the form of Linux and other
software, introducing the "bazaar" or decentralized style of constructing software.[10] The
Internet and World Wide Web then hit in the mid 90s, changing the engineering of software
once again. Distributed systems gained sway as a way to design systems, and the Java
programming language was introduced as another step in abstraction, having its own virtual
machine. Programmers collaborated and wrote the Agile Manifesto, which favored more
lightweight processes to create cheaper and more timely software.
There are a number of areas where the evolution of software engineering is notable:
• Professionalism: The early 1980s witnessed software engineering becoming a full-
fledged profession, like computer science and the other engineering fields.
• Impact of women: In the early days of computing (the 1940s, 1950s, and 1960s),
men were found in the hardware sector because of the physical demands of
hard-wiring heavy-duty equipment, which was considered too strenuous for women. The
writing of software was therefore delegated to women. Some of the women who held
programming jobs at this time include Grace Hopper and Jamie Fenton.
Today, many fewer women work in software engineering than in other professions;
the reason for this is yet to be ascertained.
• Processes: Processes have become a great part of software engineering; they are
praised for their ability to improve software and sharply condemned for their
potential to constrain programmers.
• Cost of hardware: The relative cost of software versus hardware has changed
substantially over the last 50 years. When mainframes were costly and needed large
support staffs, the few organizations purchasing them also had enough to fund big,
high-priced custom software engineering projects. Computers are now much more
available and much more powerful, which has many effects on software. The larger
market can sustain large projects to create commercial packages, as in the practice of
companies such as Microsoft. Inexpensive machines permit each programmer to have a
terminal capable of fairly rapid compilation. The programs under consideration can use
techniques such as garbage collection, which make them easier and faster for the
programmer to write. Conversely, many fewer organizations are interested in employing
programmers for large custom software projects, instead using commercial packages as
much as possible.
The most significant development was that new computers were emerging almost every year or
two, making existing ones obsolete. Programmers had to rewrite all their programs to run
on these new computers. They did not have computers on their desks and had to go to the
"computer room" or "computer laboratory". Jobs were run by booking machine time
or by operational staff. Jobs were run by inserting punched cards for input into the
computer's card reader and waiting for results to come back on the printer.
The field was so new that the idea of managing by schedule was absent. Predicting the
completion time of a project was almost impossible. Computer hardware was
application-specific: scientific and business tasks needed different machines. High-level
languages like FORTRAN, COBOL, and ALGOL were developed to address the need
to frequently translate old software to meet the needs of new machines. Systems software
was given out for free by the vendors since it had to be installed in the computer before it
was sold. Custom software was sold by a few companies, but there was no sale of packaged
software. Organisations such as IBM's scientific user group SHARE gave out software for
free, and as a result reuse was the order of the day. Academia did not yet teach the principles
of computer science. Modular programming and data abstraction were already being used in
programming.
The term software engineering came into existence in the late 1950s and early 1960s.
Programmers have always known about civil, electrical, and computer engineering but
found it difficult to marry engineering with software.
In 1968 and 1969, two conferences on software engineering were sponsored by the NATO
Science Committee. This gave the field its initial boost. It was widely believed that these
conferences marked the official start of the profession of software engineering.
Software engineering was prompted by the software crisis of the 1960s, 1970s, and 1980s.
It was the crisis that identified many of the problems of software development. This era was
also characterised by projects running over budget and schedule, and by property damage and
loss of life caused by poor project management. Initially the software crisis was defined in
terms of productivity, but it later grew to emphasize quality.
• Cost and Budget Overruns: The OS/360 operating system was a classic example. It
was a decade-long project from the 1960s and eventually produced one of the most
complex software systems at the time.
• Property Damage: Software defects can result in property damage. Poor software
security allows hackers to steal identities, costing time, money, and reputations.
• Life and Death: Software defects can kill. Some embedded systems used in
radiotherapy machines failed so disastrously that they administered lethal doses
of radiation to patients. The most famous of these failures is the Therac-25 incident.
For years, solving the software crisis was the primary concern of researchers and
companies producing software tools. Seemingly, every new technology and practice from
the 1970s to the 1990s was proclaimed a silver bullet to solve the software crisis. Tools,
discipline, formal methods, process, and professionalism were all put forward as silver bullets:
• Tools: Tools that received particular emphasis include structured programming,
object-oriented programming, CASE tools, Ada, Java, documentation, standards, and
the Unified Modeling Language.
• Discipline: Some pundits argued that the software crisis was due to the lack of
discipline of programmers.
• Formal methods: Some believed that if formal engineering methodologies were
applied to software development, then production of software would become as
predictable an industry as other branches of engineering. They advocated proving all
programs correct.
• Process: Many advocated the use of defined processes and methodologies like the
Capability Maturity Model.
• Professionalism: This led to work on a code of ethics, licenses, and professionalism.
Fred Brooks, in his 1986 article "No Silver Bullet", argued that no individual technology or
practice would ever make a 10-fold improvement in productivity within 10 years.
Debate about silver bullets continued over the following decade. Supporters of Ada,
components, and processes continued arguing for years that their favorite technology would
be a silver bullet. Skeptics disagreed. Eventually, almost everyone accepted that no silver
bullet would ever be found. Yet, claims about silver bullets arise now and again, even today.
"No silver bullet" means different things to different people; some take "no silver bullet"
to mean that software engineering failed. The pursuit of a single key to success never
worked: all known technologies and practices have only made incremental improvements
to productivity and quality. Yet, there are no silver bullets for any other profession, either.
Others interpret "no silver bullet" as evidence that software engineering has finally matured
and recognized that projects succeed due to hard work.
However, it could also be pointed out that there are, in fact, a series of silver bullets today,
including lightweight methodologies, spreadsheet calculators, customized browsers, in-site
search engines, database report generators, integrated design-test coding-editors with
memory/differences/undo, and specialty shops that generate niche software, such as
information websites, at a fraction of the cost of totally customized website development.
Nevertheless, the field of software engineering appears to be too difficult and too diverse for
a single "silver bullet" to improve most issues, and each issue accounts for only a small
portion of all software problems.
It became easier to display and retrieve information as a result of the use of browsers and
the HTML language. The spread of network connections brought in computer viruses
and worms on MS Windows computers. These new technologies brought in many good
innovations such as e-mail, web-based searching and e-education, to mention a few. As a
result, many software systems had to be re-designed for international searching. It was also
necessary to translate the information flow into multiple foreign languages, and many software
systems were designed for multi-language usage, based on design concepts from human
translators.
This era witnessed increasing demand for software in many smaller organizations. There
was also the need for inexpensive software solutions and this led to the growth of simpler,
faster methodologies that developed running software, from requirements to deployment.
There was a change from rapid prototyping to entire lightweight methodologies. For
example, Extreme Programming (XP) tried to simplify many areas of software engineering,
including requirements gathering and reliability testing, for the growing, vast number of
small software systems.
What is it today
Important figures in the history of software engineering
Listed below are some renowned software engineers:
• Charles Bachman (born 1924) is particularly known for his work in the area of
databases.
• Fred Brooks (born 1931), best known for managing the development of OS/360.
• Peter Chen, known for the development of entity-relationship modeling.
• Edsger Dijkstra (1930-2002) developed the framework for structured programming.
• David Parnas (born 1941) developed the concept of information hiding in modular
programming.
Software Engineer
• Designs, develops and modifies software systems, using scientific analysis and
mathematical models to predict and measure the outcomes and consequences of design.
• Determines system performance standards.
• Develops and directs software system testing and validation procedures, programming,
and documentation.
• Stores, retrieves, and manipulates data for analysis of system capabilities and
requirements.
Most employers commonly recognise the technical and functional knowledge statements
listed below as general occupational qualifications for Computer Software Engineers.
Although it is not required for the software engineer to have all of the knowledge on the list
in order to be a successful performer, adequate knowledge, skills, and abilities are necessary
for effective delivery of service.
• Structure and content of the English language including the meaning and
spelling of words, rules of composition, and grammar.
• Principles and methods for curriculum and training design, teaching and
instruction for individuals and groups, and the measurement of training
effects.
• Principles and processes for providing customer and personal services. This
includes customer needs assessment, meeting quality standards for services, and
evaluation of customer satisfaction.
Occupations have traits or characteristics which give important clues about the nature of the
work and work environment and offer you an opportunity to match your own personal
interests to a specific occupation.
Software engineer occupational characteristics or features can be categorised as: Realistic,
Investigative and Conventional as described below:
Realistic — Realistic occupations frequently involve work activities that include practical,
hands-on problems and solutions. They often deal with plants, animals, and real-world
materials like wood, tools, and machinery. Many of the occupations require working
outside, and do not involve a lot of paperwork or working closely with others.
Software Crisis
The term software crisis was used in the early days of software engineering to describe the
impact of rapid increases in computer power and the growing complexity of the problems
which could now be tackled. In essence, it refers to the difficulty of writing correct,
understandable, and verifiable computer programs. The sources of the software crisis are
complexity, expectations, and change.
Conflicting requirements have always hindered the software development process. For instance,
while users demand a large number of features, customers generally want to minimise the
amount they must pay for the software and the time required for its development.
F. L. Bauer coined the term "software crisis" at the first NATO Software Engineering
Conference in 1968 at Garmisch, Germany. The term was used early in Edsger Dijkstra's
1972 ACM Turing Award Lecture:
The major cause of the software crisis is that the machines have become more powerful!
This implied that as long as there were no machines, programming was no problem at
all; when there were a few weak computers, programming became a mild problem; and now,
with gigantic computers, programming has become an equally gigantic problem.
The challenging practical areas include fiscal, human resource, infrastructure, and
marketing issues. The causes of failure in the software development industry are
twofold: 1) poor marketing efforts, and 2) lack of quality products.
Poor marketing efforts
The problem of poor marketing efforts is more noticeable in developing economies,
where consumers of software products prefer imported software to the detriment of locally
developed ones. This problem is compounded by poor marketing approaches and the fact
that most of the hardware is not manufactured locally. Though the use of software in our
industries, service-providing organizations, and other commercial institutions is increasing
appreciably, the demand for locally developed software products is not growing at the
same rate.
One of the major reasons for this is the lack of an established national policy that can speed up
the creation of an internal market for locally developed software products. The relatively low
price of foreign software (especially from neighbouring countries) attracts consumers to
foreign products rather than local ones.
One may want to ask why clients should go for local software. In this situation, the
question may also be why foreign software products are cheaper than locally
developed software products. The answers to these questions are not far-fetched. The initial
cost of producing a software product is significantly higher than that of its
subsequent versions, because the latter can be produced by merely copying the initial one.
Most of the foreign software products available in the market are such succeeding versions.
For this reason, consumers in our country do not have to bear the initial cost of the
development. Furthermore, such software is more reliable, as it already has a reputable
track record, and many international commercial companies use these products efficiently.
On the contrary, most of the software firms in Bangladesh, for example, need to charge their
clients the initial cost of development, even though the reliability of their products is
quite uncertain. Consequently, local clients are not interested in buying local software
products. To change this situation, the government must take steps by imposing high taxes on
foreign software products and by implementing a strict copyright act for the use of software
products.
International market
Apart from developing the internal software market, we also need to aim at the international
market. At present, as our software firms have no strong track record in developing software
products, competing with other countries would just be a fruitless effort. India, for example, has
a high profile as far as software development is concerned. India has been in the global market for
at least twenty years and can win software projects from that market because of its long
experience as well as the availability of many high-level IT experts at relatively low cost
compared with the developed countries. Apart from these, India has professional immigrant
communities in the US and in other developed countries who have succeeded in influencing
the global market to procure software projects for India.
We cannot, therefore, compete with India at this time for software projects from the
global market. However, there is the need to have a policy to boost our marketing strategy
to procure global software projects. One way to do this is to allow a country like
Bangladesh, through its embassies and high commissions, to open special software
marketing units in different developed countries. Apart from this, our professional expatriates
living in the USA and other developed countries can also assist by setting up software firms
to procure software projects to be developed in Bangladesh at low cost.
In the area of software development, timing is an essential factor. Inability to deliver the
product on time can lead to loss of clients. Our observation has shown that clients cancel
work orders when software firms fail to meet the deadline. Failure to meet the
deadline for any software project may result in a negative attitude to our software marketing
efforts.
Pricing: One of the major challenges for software developers is how to price the
product. Most of the time, the question is "How much should our product go for?" On one
hand, asking too low a price is risky because developers will not be able to break even.
On the other hand, charging too much for the product will be a barrier
to our marketing efforts. In order to solve this problem, scientific economic theories need
to be applied when software companies fix the prices of their products.
One major lesson here is that those of us just starting in the global software market should
minimise our profit margin.
Lack of expertise in producing sound user requirements: Allowing developing firms
to go through the defined software development steps suggested in the software
engineering discipline is a pathway to ensuring the quality of software products. The very
first step is to analyze the users' requirements, and the design of the system depends greatly on
defining the users' requirements precisely.
Ideally, system analysts should do all sorts of analysis to produce user requirements analysis
documents. Regrettably, in Bangladesh, only a few firms pay much attention to producing
sound user requirements documents. This reveals a lack of theoretical knowledge in system
analysis and design. Producing high-quality requirements analysis documents calls for
in-depth theoretical knowledge in system analysis and design, but many local
software development firms lack expertise in this field. In order to rectify this problem,
academics in the field have to be consulted to give the necessary assistance geared
towards producing sound user requirements analysis documents.
Lack of expertise in designing the system: Aside from user requirements analysis, another
important aspect of the development process is the design of the software product.
The design of any system affects the effectiveness of the implemented software. Again, one
of the major problems confronting our software industries is the non-availability of expert
software designers. What we have on the ground are programmers
or coders, but the number of experienced and expert software engineers is still small.
In fact, we rarely have resourceful persons who can properly guide large and complex software
projects in our software industries. The result is that there are no quality end products.
It may be mentioned here that sound academic knowledge in software engineering is a must
for developing a quality software system. A link between industry and academic
institutions can improve this situation. The utilisation of the sound theoretical knowledge of
academics in industrial software projects cannot be overlooked. Besides, depending on the
complexity of the project, software firms may need to involve foreign experts for specific
periods to complete the project properly.
There is a need to follow a specific model in the software development process. The practice
in many software development firms is not to follow any particular model, and this has
greatly affected the quality of software products. It is therefore mandatory for a software
developer to select a model prior to starting a software project in order to obtain a quality product.
Absence of proper software testing procedures: For us to have quality software production,
the issue of software testing should be taken with utmost seriousness. Software demands
exhaustive testing to check its performance, and many theoretical testing methodologies exist
for checking the performance and integrity of software. It is rather unfortunate to note that many
developing firms hastily deliver the end products to their clients without
performing extensive tests. The result of this is that many software products are not free from
bugs. It should be pointed out here that fixing bugs after delivery is costlier than fixing them
during development. It is therefore important for developers to complete the test phase of the
development before delivering the end product to the clients.
Various processes and methodologies have been developed over the last few decades to
"tame" the software crisis, with varying degrees of success. However, it is widely agreed
that there is no "silver bullet", that is, no single approach which will prevent project
overruns and failures in all cases. In general, software projects which are large, complicated,
poorly specified, and involve unfamiliar aspects are still particularly vulnerable to large,
unanticipated problems.
Overview of Software Development
Software development is the set of activities that results in software products. Software
development may include research, new development, modification, reuse, re-engineering,
maintenance, or any other activities that result in software products. In particular, the first
phase in the software development process may involve many departments, including
marketing, engineering, research and development and general management.
The term software development may also refer to computer programming, the process of
writing and maintaining the source code.
There are several different approaches to software development. While some take a more
structured, engineering-based approach, others may take a more incremental approach,
where software evolves as it is developed piece-by-piece. In general, methodologies share
some combination of the following stages of software development:
• Market research
• Gathering requirements for the proposed business solution
• Analyzing the problem
• Devising a plan or design for the software-based solution
• Implementation (coding) of the software
• Testing the software
• Deployment
• Maintenance and bug fixing
These stages are collectively referred to as the software development lifecycle (SDLC).
The stages may be carried out in different orders, depending on the approach to software
development; the time devoted to different stages may also vary, and the detail of the
documentation produced at each stage may not be the same. In a "waterfall" based
approach, the stages are carried out in turn, whereas in a more "extreme" approach, the
stages may be repeated over various cycles or iterations. It is important to note that a more
"extreme" approach usually involves less time spent on planning and documentation, and
more time spent on coding and the development of automated tests. More "extreme"
approaches also encourage continuous testing throughout the development lifecycle, with
the aim of keeping the product as bug-free as possible at all times. The "waterfall" based
approach attempts to assess the majority of risks and to develop a detailed plan for the
software before implementation (coding) begins, thereby avoiding significant design changes
and re-coding in the later stages of the software development lifecycle.
Each methodology has its merits and demerits. The choice of an approach to solving a
problem using software depends on the type of problem. If the problem is well
understood and a solution can be effectively planned out ahead of time, the more
"waterfall" based approach may be the best choice. On the other hand, if the problem
is unique (at least to the development team) and the structure of the software solution
cannot be easily pictured, then a more "extreme" incremental approach may work best.
Software Development Life Cycle Model
Software life cycle models describe phases of the software cycle and the order in which
those phases are executed. There are a lot of models, and many companies adopt their own,
but all have very similar patterns. According to Raymond Lewallen (2005), the general, basic
model is shown below:
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
Each phase produces deliverables needed by the next phase in the life cycle.
Requirements are converted into design. Code is generated during implementation that is
driven by the design. Testing verifies the deliverable of the implementation phase against
requirements.
Waterfall Model
This is the most common life cycle models, also referred to as a linear-sequential life cycle
model. It is very simple to understand and use. In a waterfall model, each phase must be
completed before the next phase can begin. At the end of each phase, there is always a
review to ascertain if the project is in the right direction and whether or not to carry on or
abandon the project. Unlike the general model, phases do not overlap in a waterfall model.
Disadvantages
V-Shaped Model
Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of
processes. Each phase must be completed before the next phase begins. Testing is
emphasized in this model more than in the waterfall model: the testing procedures are
developed early in the life cycle, before any coding is done, during each of the phases
preceding implementation.
Requirements begin the life cycle model just like the waterfall model. Before development
is started, a system test plan is created. The test plan focuses on meeting the functionality
specified in the requirements gathering.
The high-level design phase focuses on system architecture and design. An integration test
plan is created in this phase as well, in order to test the ability of the pieces of the software
system to work together.
The low-level design phase is where the actual software components are designed, and unit
tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is complete,
the path of execution continues up the right side of the V where the test plans developed
earlier are now put to use.
Fig 3 V-Shaped Life Cycle Model
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
Advantages
• Simple and easy to use.
• Each phase has specific deliverables.
• Higher chance of success over the waterfall model due to the development of test
plans early on during the life cycle.
• Works well for small projects where requirements are easily understood.
Disadvantages
Incremental Model
The first iteration produces a working version of software and this makes possible to have
working software early on during the software life cycle. Subsequent iterations build on the
initial software produced during the first iteration.
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
Advantages
• Generates working software quickly and early during the software life cycle.
• More flexible – inexpensive to change scope and requirements.
• Easier to test and debug during a smaller iteration.
• Easier to manage risk because risky pieces are identified and handled during their own
iterations.
• Each iteration is an easily managed milestone.
Disadvantages
Spiral Model
The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases, namely Planning, Risk Analysis, Engineering and
Evaluation. A software project repeatedly passes through these phases in iterations which are
called spirals. In the baseline spiral, requirements are gathered and risk is assessed. Each
subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process
is carried out to discover risk and alternate solutions. A prototype is produced at the end
of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the
phase. The evaluation phase provides the customer with the opportunity to evaluate the
output of the project to date before the project continues to the next spiral.
In the spiral model, the angular component denotes progress, and the radius of the spiral
denotes cost.
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
Merits
Demerits
Requirements Phase
Business requirements are gathered in this phase. This phase is the main center of
attention of the project managers and stakeholders. Meetings with managers, stakeholders
and users are held in order to determine the requirements. The general questions
that require answers during a requirements gathering phase are: Who is going to use the
system? How will they use the system? What data should be input into the system? What
data should be output by the system? A list of functionality that the system should
provide, which describes functions the system should perform, business logic that
processes data, what data is stored and used by the system, and how the user interface
should work is produced at this point. The requirements development phase may have
been preceded by a feasibility study, or a conceptual analysis phase of the project. The
requirements phase may be divided into requirements elicitation (gathering the
requirements from stakeholders), analysis (checking for consistency and completeness),
specification (documenting the requirements) and validation (making sure the specified
requirements are correct).
Types of Requirements
Requirements analysis
The Need for Requirements Analysis
Studies reveal that insufficient attention to software requirements analysis at the beginning
of a project is the major reason for critically weak projects that often fail to fulfil the basic tasks
for which they were designed. Software companies are now spending time and resources on
effective and streamlined software requirements analysis
processes as a precondition for successful projects that support the customer's business goals and
meet the project's requirement specifications.
Requirements Analysis Process: Requirements Elicitation, Analysis And Specification
Requirements Analysis is the process of understanding the client needs and expectations
from a proposed system or application. It is a well-defined stage in the Software
Development Life Cycle model.
Requirements are a description of how a system should behave or, in other words, a description
of system properties or attributes. Considering the numerous levels of interaction between
users, business processes and devices in worldwide corporations today, there are immediate
and composite requirements for a single application, from different levels within an
organization and outside it.
The Software Requirements Analysis Process involves the complex task of eliciting and
documenting the requirements of all customers, modelling and analyzing these requirements
and documenting them as a foundation for system design.
This job (the requirements analysis process) is assigned to a specialized Requirements
Analyst. The requirements analysis function may also come under the scope of a Project
Manager, Program Manager or Business Analyst, depending on the organizational
hierarchy.
Requirements elicitation
Here, information is gathered from the multiple stakeholders identified. The Requirements
Analyst draws out from each of these groups what their requirements for the application
are and what they expect the application to achieve. Taking into account the multiple
stakeholders involved, the list of requirements gathered in this manner could run into pages.
The level of detail of the requirements list depends on the number and size of user groups,
the degree of complexity of business processes and the size of the application.
Problems faced in Requirements Elicitation
• Ambiguous understanding of processes
• Inconsistency within a single process by multiple users
• Insufficient input from stakeholders
• Conflicting stakeholder interests
• Changes in requirements after project has begun
Requirements Analysis
Once all the stakeholder requirements have been gathered, a structured analysis of these
can be done after modeling the requirements. Some of the software requirements analysis
techniques used are requirements animation, automated reasoning, knowledge-based
critiquing, consistency checking, and analogical and case-based reasoning.
Requirements Specification
After requirements have been elicited, modeled and analyzed, they should be
documented in clear, definite terms. A written requirements document is crucial, and it
should be circulated among all stakeholders, including the client, user groups, and the
development and testing teams. It has been observed that a well-designed, clearly
documented Requirements Specification is vital and serves as a:
• Base for validating the stated requirements and resolving stakeholder conflicts, if any
• Contract between the client and development team
• Basis for systems design for the development team
• Bench-mark for project managers for planning project development lifecycle and goals
• Source for formulating test plans for QA and testing teams
• Resource for requirements management and requirements tracing
• Basis for evolving requirements over the project life span
Software requirements specification involves scoping the requirements so that they meet the
customer's vision. It is the result of teamwork between the end-user, who is usually not a
technical expert, and a Technical/Systems Analyst, who is expected to approach the
situation in technical terms.
The software requirements specification is a document that lists the stakeholders' needs
and communicates these to the technical community that will design and build the system.
Communicating a well-written requirements specification to both these groups, and to all
the sub-groups within them, is a real challenge. To overcome this, Requirements
Specifications may be documented separately as:
• User Requirements - written in clear, precise language with plain text and use cases,
for the benefit of the customer and end-user
• System Requirements - expressed as a programming or mathematical model, meant to
address the Application Development Team and QA and Testing Team.
Requirements Specification serves as a starting point for software, hardware and database
design. It describes the function (Functional and Non-Functional specifications) of the
system, performance of the system and the operational and user-interface constraints that will
govern system development.
Requirements Management
Requirements Management is the all-inclusive process that covers all aspects of software
requirements analysis and also ensures the verification, validation and traceability of
requirements. Effective requirements management practices ensure that all system
requirements are stated unambiguously, that omissions and errors are corrected, and that
evolving specifications can be incorporated later in the project lifecycle.
Design Phase
The software system design is produced from the results of the requirements phase. This is
where the details of how the system will work are produced. Deliverables in this phase
include the hardware and software architecture, the communication design, and the software design.
The design process is very important. A builder, for example, would not attempt
to build a house without an approved blueprint, so as not to risk the structural integrity
and customer satisfaction. In the same way, the approach to building software products is
no different. The emphasis in design is on quality. It is pertinent to note that this is the only
phase in which the customer's requirements can be precisely translated into a finished
software product or system. As such, software design serves as the foundation for all the
software engineering steps that follow, regardless of which process model is being
employed.
During the design process the software specifications are changed into design models that
express the details of the data structures, system architecture, interface, and components.
Each design product is re-examined for quality before moving to the next phase of
software development. At the end of the design process a design specification document
is produced. This document is composed of the design models that describe the data,
architecture, interfaces and components.
• Data design – created by transforming the analysis information model (data dictionary
and ERD) into the data structures needed to implement the software. Part of the data
design may occur in conjunction with the design of the software architecture; more
detailed data design occurs as each software component is designed.
• Architectural design - defines the relationships among the major structural
elements of the software, the "design patterns" that can be used to attain the
requirements that have been defined for the system, and the constraints that affect the
way in which the architectural patterns can be applied. It is derived from the system
specification, the analysis model, and the subsystem interactions defined in the
analysis model (DFD).
• Interface design - explains how the software elements communicate with each
other, with other systems, and with human users. Much of the necessary information
is provided by the data flow and control flow diagrams.
• Component-level design – converts the structural elements defined by the
software architecture into procedural descriptions of software components, using
information acquired from the process specification (PSPEC), control specification
(CSPEC), and state transition diagram (STD).
Design Guidelines
In order to assess the quality of a design (representation) the yardstick for a good design
should be established. Such a design should:
These criteria are not acquired by chance. The software design process promotes good design
through the application of fundamental design principles, systematic methodology and
through review.
Design Principles
The design process is a series of steps that allow the designer to describe all aspects of
the software to be built. However, it is not merely a recipe book; for a competent and
successful design, the designer must use creative skill, past experience, a sense of what
makes "good" software, and have a commitment to quality.
The principles that have been established to help the software engineer in directing
the design process are:
• The design process should not suffer from tunnel vision – A good designer
should consider alternative approaches, judging each based on the requirements
of the problem, the resources available to do the job and any other constraints.
• The design should be traceable to the analysis model – because a single
element of the design model often traces to multiple requirements, it is
necessary to have a means of tracking how the requirements have been
satisfied by the model.
• The design should not reinvent the wheel – Systems are constructed using a
suite of design patterns, many of which may have likely been encountered
before. These patterns should always be chosen as an alternative to reinvention.
Design time should be spent in expressing truly fresh ideas and incorporating
those patterns that already exist.
• The design should reduce intellectual distance between the software and the
problem as it exists in the real world – This means that, the structure of the
software design should imitate the structure of the problem domain.
• The design should show uniformity and integration – a design is uniform if it
appears that one person developed the whole thing. Rules of style and format
should be defined for a design team before design work begins. A design is
integrated if care is taken in defining interfaces between design components.
• The design should be structured to degrade gently, even when bad data, events,
or operating conditions are encountered – Well-designed software should never
"bomb". It should be designed to accommodate unusual circumstances, and if
it must terminate processing, it should do so in a graceful manner.
• The design should be reviewed to minimize conceptual (semantic) errors –
there is sometimes a tendency to focus on minute details when the design is
reviewed, missing the forest for the trees. The design team should ensure that the
major conceptual elements of the design have been addressed before worrying
about the syntax of the design model.
• Design is not coding, coding is not design – Even when detailed designs are
created for program components, the level of abstraction of the design model is
higher than source code. The only design decisions made at the coding level
address the small implementation details that enable the procedural design to
be coded.
• The design should be structured to accommodate change
• The design should be assessed for quality as it is being created
With the proper application of design principles, the design displays both external and
internal quality factors. External quality factors are those that can readily be
observed by the user (e.g. speed, reliability, correctness, usability). Internal quality
factors have to do with technical quality, especially the quality of the design itself. To
achieve internal quality factors, the designer must understand basic design concepts.
Over the past four decades, a set of fundamental software design concepts has evolved,
each providing the software designer with a foundation from which more sophisticated
design methods can be applied. Each concept assists the software engineer in answering the
following questions:
• What criteria can be used to partition software into individual components?
• Are there uniform criteria that define the technical quality of a software design?
Modularity
What is Modularity?
The concept of modularity in computer software has been promoted for about five decades.
In essence, the software is divided into separately named and addressable components, called
modules, that are integrated to satisfy the problem requirements. It is important to note that a
reader cannot easily understand a large program written as a single module: the number of
variables, control paths and sheer complexity make understanding almost impossible. As a
result, a modular approach allows the software to be intellectually manageable.
However, it is important to note that software cannot be subdivided indefinitely so as to
make the effort required to understand or develop it negligible. This is because, although the
effort to develop each individual module decreases as the number of modules increases, the
effort required to integrate the modules increases.
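As a small, hypothetical sketch (the module and function names are illustrative, not from the text), the payroll program below is split into two separately named, cohesive pieces instead of one monolithic block; in a real project each piece would live in its own file (module):

# tax_rules: one concern only - computing tax
def income_tax(gross):
    # Deliberately simplified flat-rate tax, for illustration only.
    return gross * 0.10

# payroll: a second concern that integrates the pieces
# (with separate files this would be: from tax_rules import income_tax)
def net_pay(hours, rate):
    gross = hours * rate
    return gross - income_tax(gross)

if __name__ == "__main__":
    print(net_pay(40, 12.5))   # 450.0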
Logical Modularity
Physical Modularity
that can be independently managed and maintained. Fixes in one portion of the
code do not necessarily affect the entire system.
• Easier Modification & Maintenance: post-production system maintenance is
another crucial benefit of modular design. Developers can fix and
make non-infrastructural changes to a module without affecting other modules.
The updated module can independently go through the build and release cycle
without the need to re-build and redeploy the entire system.
• Functionally Scalable: depending on the level of sophistication of your modular
design, it's possible to introduce new functionalities with little or no change to
existing modules. This allows your software system to scale in functionality
without becoming brittle and a burden on developers.
• Process-oriented design
This approach places the emphasis on the process with the objective being to design modules
that have high cohesion and low coupling. (Data flow analysis and data flow diagrams are
often used.)
• Data-oriented design
In this approach the data comes first. That is, the structure of the data is determined first, and
then the procedures are designed in a way that fits the structure of the data.
• Object-oriented design
In this approach, the objective is to first identify the objects and then build the product
around them. In essence, this technique is both data- and process-oriented.
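A minimal, hypothetical Python sketch of the object-oriented approach: the object (here an Account) is identified first, and the data and the operations on that data are built around it together:

class Account:
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account("Ada", 100.0)
acct.deposit(50.0)
acct.withdraw(30.0)
print(acct.balance)   # 120.0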
Attributes of a good Module
• Evaluate the first iteration of the program structure to reduce coupling and
improve cohesion. Once the program structure has been developed, modules may
be exploded or imploded with the aim of improving module independence.
o An exploded module becomes two or more modules in the final program
structure.
o An imploded module is the result of combining the processing implied by two
or more modules.
An exploded module normally results when common processing exists in two or more
modules and can be redefined as a separate cohesive module. When high coupling is
expected, modules can sometimes be imploded to reduce the passage of control, references to
global data and interface complexity.
• Attempt to minimise structures with high fan-out; strive for fan-in as structure
depth increases. The structure shown inside the cloud in Fig. 6 does not make
effective use of factoring.
Fig 6 Example of a program structure
• Keep the scope of effect of a module within the scope of control for that
module.
o The scope of effect of a module is defined as all other modules that are affected
by a decision made by that module. For example, the scope of control of module
e is all modules that are subordinate to it, i.e. modules f, g, h, n, p and q.
• Define modules whose function is predictable and not overly restrictive (e.g. a
module that only implements a single task).
o A module is predictable when it can be treated as a black box; that is, the same
external data will be produced regardless of internal processing details. Modules
that have internal "memory" can be unpredictable unless care is taken in their use.
o A module that restricts processing to a single task exhibits high cohesion and
is viewed favourably by a designer.
• Strive for controlled-entry modules and avoid pathological connections (e.g. branches
into the middle of another module).
o This warns against content coupling. Software is easier to understand and
maintain if the module interfaces are constrained and controlled.
Languages that formally support the module concept include IBM/360 Assembler,
COBOL, RPG and PL/1, Ada, D, F, Fortran, Haskell, OCaml, Pascal, ML, Modula-2,
Erlang, Perl, Python and Ruby. The IBM System i also uses Modules in RPG, COBOL and
CL, when programming in the ILE environment. Modular programming can be performed
even where the programming language lacks explicit syntactic features to support named
modules.
Software tools can create modular code units from groups of components. Libraries of
components built from separately compiled modules can be combined into a whole by using
a linker.
Writing an individual module represents programming in the small, while assembling a
system with the help of a module interconnection language (MIL) represents programming
in the large. An example of a MIL is MIL-75.
Top-Down Design
The method of writing a program using the top-down approach is to write a main procedure
that names all the major functions it will need. After that, the programming team
examines the requirements of each of those functions and repeats the process. These
compartmentalized sub-routines eventually perform actions so straightforward that they can
be easily and concisely coded. The program is done when all the various sub-routines
have been coded.
• Separating the low-level work from the higher-level abstractions leads to a modular
design.
• Modular design means development can be self-contained.
• Having "skeleton" code illustrates clearly how low-level modules integrate.
• Fewer operational errors.
• Much less time consuming (each programmer is only concerned with a part of the big
project).
• A very optimized way of processing (each programmer applies their own
knowledge and experience to their part (module), so the project becomes an
optimized one).
• Easy to maintain (if an error occurs in the output, it is easy to identify which module
of the entire program generated it).
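A minimal Python sketch of the top-down method described above (the report-generation task and all names are hypothetical): the main procedure is written first and merely names the major functions it needs, which are then coded in later passes.

# Top level written first: it only names the major functions it will need.
def produce_report(path):
    records = read_records(path)
    totals = summarise(records)
    print_summary(totals)

# Lower-level routines are examined and coded afterwards.
def read_records(path):
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]

def summarise(records):
    return {"count": len(records)}

def print_summary(totals):
    print(f"{totals['count']} records processed")

# produce_report("sales.csv")   # sample call, assuming such a file exists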
Bottom-up approach
In a bottom-up approach the individual base elements of the system are first specified in
great detail. These elements are then connected together to form bigger subsystems,
which are linked, sometimes in many levels, until a complete top-level system is formed.
This strategy often resembles a "seed" model, whereby the beginnings are small, but
eventually grow in complexity and completeness.
This bottom-up approach has one drawback: we need to use a lot of intuition to decide
the functionality that is to be provided by each module. This approach is more suitable if a
system is to be developed from an existing system, because it starts from existing
modules. Modern software design approaches usually mix both top-down and bottom-up
approaches.
Pseudo code
Here are a few general guidelines for writing your pseudo code:
• Mimic good code and good English. Using aspects of both systems means adhering
to the style rules of both to some degree. It is still important that variable names be
mnemonic, comments be included where useful, and English phrases be comprehensible
(full sentences are usually not necessary).
• Ignore unnecessary details. If you are worrying about the placement of commas,
you are using too much detail. It is a good idea to use some convention to group
statements (begin/end, brackets, or whatever else is clear), but you shouldn't obsess
about syntax.
• Don't belabor the obvious. In many cases, the type of a variable is clear from
context; unless it is critical that it is specified to be an integer or real, it is often
unnecessary to make it explicit.
• Take advantage of programming shorthands. Using if-then-else or looping
structures is more concise than writing out the equivalent in English; general
constructs that are not peculiar to a small number of languages are good candidates
for use in pseudocode. Using parameters in specifying procedures is concise,
clear, and accurate, and hence should not be omitted from pseudocode.
• Consider the context. If you are writing an algorithm for quicksort, the statement
"use quicksort to sort the values" is hiding too much detail; if you have already
studied quicksort in a class and later use it as a subroutine in another algorithm,
the statement would be appropriate to use.
• Don't lose sight of the underlying model. It should be possible to "see through"
your pseudocode to the model below; if not (that is, you are not able to analyze
the algorithm easily), it is written at too high a level.
• Check for balance. If the pseudocode is hard for a person to read or difficult to
translate into working code (or worse yet, both!), then something is wrong with
the level of detail you have chosen to use.
Examples of Pseudocode
Example 1 - Computing Value Added Tax (VAT): Pseudo-code the task of computing the final price of an item after figuring in VAT. Note the three types of instructions: input (get), process/calculate (=) and output (display).
1. get price of item
2. get VAT rate
3. VAT = price of item times VAT rate
4. final price = price of item plus VAT
5. display final price
6. stop
Variables: price of item, VAT rate, VAT, final price
Note that the operations are numbered and each operation is unambiguous and effectively computable. We also extract and list all variables used in our pseudo-code. This will be useful when translating the pseudo-code into a programming language.
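For instance, a direct translation of this pseudo-code into Python might look as follows; the prompts are illustrative assumptions.

# Translation of the VAT pseudo-code: input (get), process (=), output (display).
price_of_item = float(input("Enter price of item: "))    # 1. get price of item
vat_rate = float(input("Enter VAT rate (e.g. 0.075): "))  # 2. get VAT rate
vat = price_of_item * vat_rate                            # 3. VAT = price of item times VAT rate
final_price = price_of_item + vat                         # 4. final price = price of item plus VAT
print("Final price:", final_price)                        # 5. display final price
                                                          # 6. stop (end of program)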
Example 2 - Computing Weekly Wages: Gross pay depends on the pay rate and the
number of hours worked per week. However, if you work more than 50 hours, you get
paid time-and-a-half for all hours worked over 50. Pseudo-code the task of computing
gross pay given pay rate and hours worked.
This example introduces the conditional control structure. On the basis of the true/false question asked in line 3, line 3.1 is executed if the answer is True; otherwise, if the answer is False, the lines subordinate to line 4 (i.e. line 4.1) are executed. In both cases the pseudo-code resumes at line 5.
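The numbered pseudo-code itself is not reproduced here, but an equivalent Python sketch of the computation it describes, assuming the 50-hour threshold and the time-and-a-half rule stated in the problem, is:

# Gross pay with overtime: time-and-a-half for all hours worked over 50.
pay_rate = float(input("Enter hourly pay rate: "))
hours_worked = float(input("Enter hours worked this week: "))

if hours_worked > 50:                                     # conditional control structure
    gross_pay = 50 * pay_rate + (hours_worked - 50) * pay_rate * 1.5
else:
    gross_pay = hours_worked * pay_rate

print("Gross pay:", gross_pay)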
This example presents an iterative control statement. As long as the condition in line 4 is
True, we execute the subordinate operations 4.1 - 4.3. When the condition is False, we return
to the pseudo-code at line 5.
This is an example of a top-test or while-do iterative control structure. There is also a bottom-test or repeat-until iterative control structure, which executes a block of statements at least once and keeps repeating it as long as the condition tested at the end of the block is False (it stops when the condition becomes True).
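As a hedged illustration of the two loop shapes (Python has no built-in repeat-until, so the bottom-test form is simulated with a break; the sample values are arbitrary), compare:

values = [4, 8, 15, 16, 23, 42]

# Top-test (while-do): the condition is checked before each pass.
total = 0
i = 0
while i < len(values):
    total += values[i]
    i += 1
print("Sum:", total)                  # 108

# Bottom-test (repeat-until): the body runs at least once and the
# condition is checked at the end of each pass.
i = 0
count = 0
while True:
    count += 1
    i += 1
    if i >= len(values):              # "until" condition: stop when it becomes True
        break
print("Passes:", count)               # 6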
For looping and selection, the keywords to be used include Do While...EndDo; Do Until...EndDo; Case...EndCase; If...EndIf; Call ... with (parameters); Call; Return ...; Return; When. Always use scope terminators for loops and selection structures.
As verbs, use words such as Generate, Compute, Process, set, reset, increment, calculate, add, sum, multiply, print, display, input, output, edit, test, etc. These, used with careful indentation, tend to foster desirable pseudocode.
Do not include data declarations in your pseudocode.
Programming Environment
Programming environments give the basic tools and Application Programming Interfaces (APIs) necessary to construct programs. Programming environments help in the creation, modification, execution and debugging of programs. The goal of integrating a programming environment is more than simply building tools that share a common database and provide a consistent user interface. Altogether, the programming environment appears to the programmer as a single tool; there are no firewalls separating the various functions provided by the environment.
The history of software tools began with the first computers in the early 1950s, which used linkers, loaders, and control programs. In the early 1970s such tools became prominent with Unix, with tools like grep, awk and make that were meant to be combined flexibly with pipes. The term "software tools" came from the book of the same name by Brian Kernighan and P. J. Plauger. Originally, tools were simple and lightweight. As some tools have been maintained, they have been integrated into more powerful integrated development environments (IDEs). These environments combine functionality into one place, sometimes increasing simplicity and productivity, other times sacrificing flexibility and extensibility. The workflow of IDEs is routinely contrasted with alternative approaches, such as the use of Unix shell tools with text editors like Vim and Emacs.
The difference between tools and applications is unclear. For example, developers use simple
databases (such as a file containing a list of important values) all the time as tools. However a
full-blown database is usually thought of as an application in its own right.
For many years, computer-assisted software engineering (CASE) tools were preferred. CASE
tools emphasized design and architecture support, such as for UML. But the most successful
of these tools are IDEs.
The ability to use a variety of tools productively is one quality of a skilled software engineer.
Software development tools can be roughly divided into a number of categories, for example:
• Debuggers: gdb, GNU Binutils, valgrind. Debugging tools are used in the process of debugging code, and can also help produce code that is more standards-compliant and portable than it would otherwise be.
Integrated development environments (IDEs) merge the features of many tools into one complete package. They are usually simpler and make it easier to do simple tasks, such as searching for content only in files in a particular project. IDEs are often used for development of enterprise-level applications. Some examples of IDEs are:
• Delphi
• C++ Builder (CodeGear)
• Microsoft Visual Studio
• EiffelStudio
• GNAT Programming Studio
• Xcode
• IBM Rational Application Developer
• Eclipse
• NetBeans
• IntelliJ IDEA
• WinDev
• Code::Blocks
• Lazarus
CASE tools are a class of software that automates many of the activities involved in
various life cycle phases. For example, when establishing the functional requirements of
a proposed application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will look after
development. Subsequently, system designers can use automated design tools to
transform the prototyped functional requirements into detailed design documents.
Programmers can then use automated code generators to convert the design documents
into code. Automated tools can be used collectively, as mentioned, or individually. For
example, prototyping tools could be used to define application requirements that get
passed to design technicians who convert the requirements into detailed designs in a
traditional manner using flowcharts and narrative documents, without the assistance of
automated design software.
CASE is the scientific application of a set of tools and methods to a software system, meant to result in high-quality, defect-free, and maintainable software products. It also refers to methods for the development of information systems together with automated tools that can be used in the software development process.
Many CASE tools not only yield code but also generate other output typical of various systems analysis and design methodologies, such as data flow diagrams, entity-relationship diagrams, structure charts and design documentation.
History of CASE
The term CASE was originally coined by the software company Nastec Corporation of Southfield, Michigan in 1982 with their original integrated graphics and text editor GraphiText, which was also the first microcomputer-based system to use hyperlinks to cross-reference text strings in documents. Under the direction of Albert F. Case, Jr., vice president for product management and consulting, and Vaughn Frick, director of product management, the DesignAid product suite was expanded to support analysis of a wide range of structured analysis and design methodologies, notably those of Ed Yourdon and Tom DeMarco, Chris Gane & Trish Sarson, Ward-Mellor (real-time) SA/SD and Warnier-Orr (data driven).
The next competitor into the market was Excelerator from Index Technology in
Cambridge, Mass. While DesignAid ran on Convergent Technologies and later
Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/
AT platform. While, at the time of launch, and for several years, the IBM platform did not
support networking or a centralized database as did the Convergent Technologies or
Burroughs machines, the allure of IBM was strong, and Excelerator came to prominence.
Hot on the heels of Excelerator was a rash of offerings from companies such as Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instruments' IEF, and Accenture's FOUNDATION toolset (METHOD/1, DESIGN/1, INSTALL/1, FCP).
CASE tools were at their peak in the early 1990s. At the time IBM had proposed AD/Cycle, an alliance of software vendors centered on IBM's software repository using IBM DB2 on the mainframe and OS/2:
The application development tools can be from several sources: from IBM, from vendors,
and from the customers themselves. IBM has entered into relationships with Bachman
Information Systems, Index Technology Corporation, and Knowledgeware, Inc. wherein
selected products from these vendors will be marketed through an IBM complementary
marketing program to provide offerings that will help to achieve complete life-cycle
coverage.
With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening
the market for the mainstream CASE tools of today. Interestingly, nearly all of the leaders
of the CASE market of the early 1990s ended up being purchased by Computer
Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett
Management Systems (LBMS).
Workbenches and environments are generally built as collections of tools. Tools can
therefore be either stand alone products or components of workbenches and
environments.
CASE Environment
An environment is a collection of CASE tools and workbenches that supports the software
process. CASE environments are classified based on the focus/basis of integration
• Toolkits
• Language-centered
• Integrated
• Fourth generation
• Process-centered
Toolkits
Toolkits are loosely integrated collections of products easily extended by aggregating different tools and workbenches. Typically, the support provided by a toolkit is limited to programming, configuration management and project management. The toolkit itself is an environment extended from a basic set of operating system tools, for example the Unix Programmer's Work Bench and the VMS VAX Set. In addition, the loose integration of a toolkit requires users to activate tools by explicit invocation or simple control mechanisms. The resulting files are unstructured and could be in different formats, so accessing a file from different tools may require explicit file format conversion. However, since the only constraint for adding a new component is the format of the files, toolkits can be easily and incrementally extended.
Language-centered
The environment itself is written in the programming language for which it was developed, thus enabling users to reuse, customize and extend the environment. Integration of code written in different languages is a major issue for language-centered environments. Lack of process and data integration is also a problem. The strengths of these environments include a good level of presentation and control integration. Interlisp, Smalltalk, Rational, and KEE are examples of language-centered environments.
Integrated
These environments achieve presentation integration by providing uniform, consistent, and
coherent tool and workbench interfaces. Data integration is achieved through the
repository concept: they have a specialized database managing all information produced
and accessed in the environment. Examples of integrated environment are IBM AD/Cycle
and DEC Cohesion.
Fourth generation
Fourth-generation environments were the first integrated environments. They are sets of tools and workbenches supporting the development of a specific class of program: electronic data processing and business-oriented applications. In general, they include programming tools, simple configuration management tools, document handling facilities and, sometimes, a code generator to produce code in lower-level languages. Informix 4GL and Focus fall into this category.
Process-centered
Environments in this category focus on process integration, with the other integration dimensions as starting points. A process-centered environment operates by interpreting a process model created by specialized tools. Such environments usually consist of tools that handle two functions: producing the process model and executing (enacting) it.
All aspects of the software development life cycle can be supported by software tools,
and so the use of tools from across the spectrum can, arguably, be described as CASE;
from project management software through tools for business and functional analysis,
system design, code storage, compilers, translation tools, test software, and so on.
However, it is the tools that are concerned with analysis and design, and with using design information to create parts (or all) of the software product, that are most frequently thought of as CASE tools. CASE applied, for instance, to a database software product, might normally involve modelling business processes and data flow, developing data models such as entity-relationship diagrams, producing process and function descriptions, and generating database creation statements and stored procedures.
CASE Risk
HIPO Diagrams
The HIPO (Hierarchy plus Input-Process-Output) technique is a tool for planning and/or documenting a computer program. A HIPO model consists of a hierarchy chart that graphically represents the program's control structure and a set of IPO (Input-Process-Output) charts that describe the inputs to, the outputs from, and the functions (or processes) performed by each module on the hierarchy chart.
Using the HIPO technique, designers can evaluate and refine a program's design, and correct flaws prior to implementation. Given the graphic nature of HIPO, users and managers can easily follow a program's structure. The hierarchy chart serves as a useful planning and visualization document for managing the program development process. The IPO charts define for the programmer each module's inputs, outputs, and algorithms.
In theory, HIPO provides valuable long-term documentation. However, the "text plus flowchart" nature of the IPO charts makes them difficult to maintain, so the documentation often does not represent the current state of the program.
By its very nature, the HIPO technique is best used to plan and/or document a hierarchically
structured program.
The HIPO technique is often used to plan or document a structured program. A variety of tools, including pseudocode and structured English, can be used to describe processes on an IPO chart. System flowcharting symbols are sometimes used to identify physical input, output, and storage devices on an IPO chart.
Components of HIPO
A completed HIPO package has two parts. A hierarchy chart is used to represent the top-
down structure of the program. For each module depicted on the hierarchy chart, an IPO
(Input-Process-Output) chart is used to describe the inputs to, the outputs from, and the
process performed by the module. The hierarchy chart in Figure 7, for example, includes the following modules:
• Manage inventory
• Update stock
• Process sale
• Process return
• Process shipment
• Generate report
• Respond to query
• Display status report
• Maintain inventory data
• Modify record
• Add record
• Delete record
Figure 7 A hierarchy chart for an interactive inventory control program.
Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
At the top of Figure 7 is the main control module, Manage inventory (module 1.0). It
accepts a transaction, determines the transaction type, and calls one of its three
subordinates (modules 2.0, 3.0, and 4.0).
Lower-level modules are identified relative to their parent modules; for example, modules 2.1, 2.2, and 2.3 are subordinates of module 2.0, modules 2.1.1, 2.1.2, and 2.1.3 are subordinates of 2.1, and so on. The module names consist of an active verb followed by a subject that suggests the module's function.
The objective of the module identifiers is to uniquely identify each module and to
indicate its place in the hierarchy. Some designers use Roman numerals (level I, level II)
or letters (level A, level B) to designate levels. Others prefer a hierarchical numbering
scheme; e.g., 1.0 for the first level; 1.1, 1.2, 1.3 for the second level; and so on. The key
is consistency.
The box at the lower-left of Figure 7 is a legend that explains how the arrows on the
hierarchy chart and the IPO charts are to be interpreted. By default, a wide clear arrow
represents a data flow, a wide black arrow represents a control flow, and a narrow arrow
indicates a pointer.
An IPO chart is prepared to document each of the modules on the hierarchy chart.
Overview diagrams
An overview diagram is a high-level IPO chart that summarizes the inputs to, processes or tasks performed by, and outputs from a module. For example, an overview diagram can be prepared for process 2.0, Update stock. Where appropriate, system flowcharting symbols are used to identify the physical devices that generate the inputs and accept the outputs. The processes are typically described in brief paragraph or sentence form. Arrows show the primary input and output data flows.
Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
Overview diagrams are primarily planning tools. They often do not appear in the completed
documentation package.
Detail diagrams
A detail diagram is a low-level IPO chart that shows how specific input and output data
elements or data structures are linked to specific processes. In effect, the designer
integrates a system flowchart into the overview diagram to show the flow of data and control
through the module.
Figure 7.2 shows a detail diagram for module 2.0, Update stock. The process steps are written in pseudocode. Note that the first step writes a menu to the user screen and input data (the transaction type) flows from that screen to step 2. Step 3 is a case structure. Step 4 writes a transaction-complete message to the user screen.
The solid black arrows at the top and bottom of the process box show that control flows from
module 1.0 and, upon completion, returns to module 1.0. Within the case structure (step 3)
are other solid black arrows.
Following case 0 is a return (to module 1.0). The two-headed black arrows following cases 1, 2, and 3 represent subroutine calls; the off-page connector symbols (the little home plates) identify each subroutine's module number. Note that each subroutine is documented in a separate IPO chart. Following the default case, the arrow points to an on-page connector symbol numbered 1. Note the matching on-page connector symbol pointing to the select structure. On-page connectors are also used to avoid crossing arrows on data flows.
Often, detailed notes and explanations are written on an extended description that is
attached to each detail diagram. The notes might specify access methods, data types, and
so on.
Figure 64.4 shows a detail diagram for process 2.1. The module writes a template to the user screen, reads a stock number and a quantity from the screen, uses the stock number as a key to access an inventory file, and updates the stock on hand. Note that the logic repeats the data entry process if the stock number does not match an inventory record. A real IPO chart is likely to show the error response process in greater detail.
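As a rough, hypothetical rendering of the process steps just described for module 2.1, the logic might be sketched in Python as follows; the data layout, prompts and sample values are assumptions made for this sketch and are not part of the HIPO source.

# Sketch of module 2.1: update the stock on hand for one inventory item.
def update_stock_item(inventory):
    # inventory: dict mapping stock number -> quantity on hand.
    while True:
        print("UPDATE STOCK")                      # write a template to the user screen
        stock_number = input("Stock number: ")     # read the stock number from the screen
        quantity = int(input("Quantity: "))        # read the quantity from the screen
        if stock_number in inventory:              # use the stock number as a key to the inventory file
            inventory[stock_number] += quantity    # update the stock on hand
            return
        print("No matching record - try again")    # repeat data entry if no record matches

inventory = {"A100": 12, "B205": 3}
update_stock_item(inventory)
print(inventory)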
Some designers simplify the IPO charts by eliminating the arrows and system flowchart symbols and showing only the text. Often, the input and output blocks are moved above the process block (Figure 64.5), yielding a form that fits better on a standard 8.5 × 11 (portrait orientation) sheet of paper. Some programmers insert modified IPO charts similar to Figure 64.5 directly into their source code as comments. Because the documentation is closely linked to the code, it is often more reliable than stand-alone HIPO documentation, and more likely to be maintained.
Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
Detail diagram —
A low-level IPO chart that shows how specific input and output data elements or data
structures are linked to specific processes.
Hierarchy chart —
A diagram that graphically represents a program's control structure.
HIPO (Hierarchy plus Input-Process-Output) —
A tool for planning and/or documenting a computer program that utilizes a hierarchy chart to graphically represent the program's control structure and a set of IPO (Input-Process-Output) charts to describe the inputs to, the outputs from, and the functions performed by each module on the hierarchy chart.
IPO (Input-Process-Output) chart —
A chart that describes or documents the inputs to, the outputs from, and the
functions (or processes) performed by a program module.
Overview diagram —
A high-level IPO chart that summarizes the inputs to, processes or tasks
performed by, and outputs from a module.
Visual Table of Contents (VTOC) —
A more formal name for a hierarchy chart.
Software
In the 1970s and early 1980s, HIPO documentation was typically prepared by hand using a template. Some CASE products and charting programs include HIPO support, and some forms generation programs can be used to generate HIPO forms. The examples in this section were prepared using Visio.
What is Implementation?
During implementation, code is produced from the deliverables of the design phase. It is the longest phase of the software development life cycle. Since code is produced here, the developer regards this phase as the main focus of the life cycle. Implementation may overlap with both the design and testing phases. As we learned in the previous unit, many tools exist (CASE tools) to automate the production of code using information gathered and produced during the design phase. The implementation phase deals with issues of quality, performance, baselines, libraries, and debugging. The end deliverable is the product itself.
Phase            Deliverables
Implementation   Code
                 Critical Error Removal

Table 1: The Implementation Phase
Critical Error Removal
There are three kinds of errors in a system, namely critical errors, non-critical errors, and
unknown errors.
A critical error prevents the system from fully satisfying its intended use. Such errors have to be corrected before the system can be given to a customer, or even before further development can progress.
A non-critical error is known, but its occurrence does not notably affect the system's expected quality. There may indeed be many known errors in the system; they are usually listed in the release notes and have well-established workarounds.
In addition, the system is likely to have many yet-to-be-discovered errors. The outcome of these errors is unknown: some may turn out to be critical, while others may simply be fixed by patches or in the next release of the system.
By defining the goals, projects, and deliverables, your company will have greater direction during the changeover. The goals and projects must be measurable. The following questions, for example, may be necessary: Is it your goal to have 25% of your staff comfortable enough to train the remaining staff? Do you want full implementation of the software by March? By utilizing Six Sigma metrics, careful monitoring of team productivity and implementation success is possible.
Goals and projects must be usable with metrics. By using Six Sigma measurement methods, it is possible to follow user understanding, familiarity, and progress accurately. It should be noted that continuous data is more useful than discrete data, because it gives a better overview of the implementation success rate.
3. Implementation Analysis
Analysis is important for tackling the occurrence of defects. The Six Sigma method examines essential relationships and ensures all factors are considered. For example, suppose that in a software implementation trial, employees are frustrated and confused by new processes. Careful analysis will look at the reasons behind the confusion.
4. Implementation Improvement
After analysis, it is important to look at how the implementation could be improved. In the example above, perhaps using proficient team members to mentor struggling ones will help. Six Sigma improvement depends upon experimental design and carefully constructed analysis of data in order to keep further defects in the implementation process at bay.
Security
If applicable to the system being implemented, there is a need to include an overview of the system security features and requirements during the implementation.
System Security Features
It is pertinent to discuss the security features that will be associated with the system
when it is implemented. It should include the primary security features associated
with the system hardware and software. Security and protection of sensitive bureau
data and information should be discussed.
Implementation Support
This part describes the support such as: software, materials, equipment, and facilities
necessary for the implementation, as well as the personnel requirements and training
essential for the implementation.
Hardware
This section offers a list of support equipment and includes all hardware used for testing the implementation. For example, if a client/server database is implemented on a LAN, a network monitor or "sniffer" might be used, along with test programs, to determine the performance of the database and LAN at high utilization rates.
Software
This section provides a list of software and databases required to support the
implementation. Identify the software by name, code, or acronym. Identify which
software is commercial off-the-shelf and which is State-specific. Identify any
software used to facilitate the implementation process.
Facilities
This section identifies the physical facilities and accommodations required during implementation. Examples include physical workspace for assembling and testing hardware components, desk space for software installers, and classroom space for training the implementation staff. Specify the hours per day needed, number of days, and anticipated dates.
Material
This section provides a list of required support materials, such as magnetic tapes
and disk packs.
Personnel
This section describes personnel requirements and any known or proposed staffing requirements. It also describes the training to be provided for the implementation staff.
Personnel Requirements and Staffing
This section describes the number of personnel, the length of time needed, and the types of skills and skill levels for the staff required during the implementation period. If particular staff members have been selected or proposed for the implementation, identify them and their roles in the implementation.
Present a training curriculum listing the courses that will be provided, a course
sequence and a proposed schedule. If appropriate, identify which courses
particular types of staff should attend by job position description.
If training will be provided by one or more commercial vendors, identify them, the
course name(s), and a brief description of the course content.
If the training will be provided by State staff, provide the course name(s) and an
outline of the content of each course. Identify the resources, support materials,
and proposed instructors required to teach the course(s).
Performance Monitoring
This section describes the performance monitoring tools and techniques and how they will be used to help decide whether the implementation is successful.
Site Requirements
This section defines the requirements that must be met for the orderly implementation of the system and describes the hardware, software, and site-specific facilities requirements for this area.
Any site requirements that do not fall into the following three categories and were not described in Section 3, Implementation Support, may be described in this section, or other subsections may be added following Facilities Requirements below:
• Database--Describe the database environment where the software system and the
database(s), if any, will be installed. Include a description of the different types
of database and library environments (such as production, test, and training databases).
• Include the host computer database operating procedures, database file and
library naming conventions, database system generation parameters, and any
other information needed to effectively establish the system database
environment.
• Include database administration procedures for testing changes, if any, to the
database management system before the system implementation.
• Data Update--If data update procedures are described in another document, such
as the operations manual or conversion plan, that document may be referenced
here. The following are examples of information to be included:
- Control inputs
- Operating instructions
- Database data sources and inputs
- Output reports
- Restart and recovery procedures
Back-Off Plan
This section specifies when to make the go/no-go decision and the factors to be included in making the decision. The plan then goes on to provide a detailed list of steps and actions required to restore the site to its original, pre-conversion condition.
Post-Implementation Verification
This section describes the process for reviewing the implementation and deciding if
it was successful. It describes how an action item list will be created to rectify any
noted discrepancies. It also references the Back-Off Plan for instructions on how to
back-out the installation, if, as a result of the post-implementation verification, a
no-go decision is made.
Testing Phase
Testing can never totally detect all the defects within software. Instead, it provides a comparison of the state and behaviour of the product against the instruments someone applies to recognize a problem. These instruments may include specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.
Every software product has a target audience. For instance, the audience for video game software is completely different from that for banking software. Software testing, therefore, is the process of attempting to assess whether the software product will be satisfactory to its end users, its target audience, its purchasers, and other stakeholders.
In 1979, Glenford J. Myers introduced the separation of debugging from testing, which illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. In 1988, Dave Gelperin and William C. Hetzel classified the phases and goals of software testing into the following stages: debugging-oriented (until 1956), demonstration-oriented (1957-1978), destruction-oriented (1979-1982), evaluation-oriented (1983-1987) and prevention-oriented (1988 onwards).
Testing methods
Traditionally, software testing methods are divided into black box testing, white box testing and grey box testing. A test engineer uses these approaches to describe the point of view taken when designing test cases.
Black box testing considers the software as a "black box" in the sense that there is no
knowledge of internal implementation. Black box testing methods include: equivalence
partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing,
traceability matrix, exploratory testing and specification-based testing.
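As a hedged illustration of two of these methods, the sketch below derives test cases for the gross-pay rule from the earlier pseudo-code example using equivalence partitioning (one representative value per class) and boundary value analysis (values at and around the 50-hour threshold). The function name and the chosen values are assumptions made purely for this sketch; the tests themselves are written against the specification, not the implementation.

# Black-box test cases for a gross_pay(rate, hours) function with overtime above 50 hours.
def gross_pay(rate, hours):
    if hours > 50:
        return 50 * rate + (hours - 50) * rate * 1.5
    return hours * rate

# Equivalence partitioning: one representative value from each class.
assert gross_pay(10, 20) == 200       # class: hours <= 50
assert gross_pay(10, 60) == 650       # class: hours > 50

# Boundary value analysis: values at and around the 50-hour boundary.
assert gross_pay(10, 49) == 490
assert gross_pay(10, 50) == 500
assert gross_pay(10, 51) == 515       # 500 + 1 * 10 * 1.5
print("all black-box test cases passed")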
In white box testing, the tester has access to the internal data structures and algorithms, including the code that implements them.
Types of white box testing
White box testing is of different types, namely API testing, code coverage analysis, fault injection methods, mutation testing and static testing.
Grey box testing requires access to internal data structures and algorithms for the purpose of designing the test cases, but testing is carried out at the user, or black-box, level. Manipulating input data and formatting output cannot be regarded as grey box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository can be regarded as grey box, because the user would not ordinarily be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
Integration Testing
Integration testing is any type of software testing that seeks to reveal clashes between individual software modules when they are combined. Such integration flaws can result when new modules are developed in separate branches and then integrated into the main project.
Regression Testing
Regression testing is any type of software testing that attempts to reveal software regressions. A regression occurs whenever software functionality that was previously working correctly stops working as intended. Usually, regressions occur as an unintended result of program changes, when a newly developed part of the software collides with previously existing code. Methods of regression testing include re-running previously run tests and checking whether previously repaired faults have re-appeared. The extent of testing depends on the phase in the release process and the risk of the added features.
Acceptance testing
1. A smoke test which is used as an acceptance test prior to introducing a new build
to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, usually in their lab environment on
their own HW, is known as user acceptance testing (UAT).
• Performance testing checks whether the software can handle large quantities of data or users. This is generally referred to as software scalability. This activity of non-functional software testing is often referred to as load testing.
• Stability testing checks whether the software can continuously function well in or above an acceptable period. This activity of non-functional software testing is often referred to as endurance testing.
• Usability testing is used to check if the user interface is easy to use and understand.
• Security testing is essential for software that processes confidential data to prevent
system intrusion by hackers.
• Internationalization and localization is needed to test these aspects of software, for
which a pseudo localization method can be used.
In contrast to functional testing, which establishes the correct operation of the software (in that it matches the expected behaviour defined in the design requirements), non-functional testing confirms that the software behaves properly even when it receives invalid or unexpected inputs. Non-functional testing, especially for software, is meant to establish whether the device under test can tolerate invalid or unexpected inputs, thereby establishing the robustness of input validation routines as well as error-handling routines. An example of non-functional testing is software fault injection, in the form of fuzzing.
Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail, in order to test its
robustness.
Testing process
The testing process can take two forms. In one practice, testing is performed by an independent group of testers after the functionality is developed and before the product is shipped to the customer. Another practice is to start software testing at the same time the project starts and to continue it until the project finishes. The first practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
• Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors. (A minimal example of a unit test follows this list.)
• Integration testing exposes defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated and
tested until the software works as a system.
• System testing tests a completely integrated system to verify that it meets its
requirements.
• System integration testing verifies that a system is integrated to any external or
third party systems defined in the system requirements.
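The following is a minimal sketch of a unit test written with Python's standard unittest module; the function under test and its expected values are assumptions made purely for illustration.

import unittest

def final_price(price, vat_rate=0.075):
    # Unit under test: price plus VAT, rounded to two decimal places.
    return round(price * (1 + vat_rate), 2)

class FinalPriceTest(unittest.TestCase):
    def test_typical_price(self):
        self.assertEqual(final_price(100.0), 107.5)

    def test_zero_price(self):
        self.assertEqual(final_price(0.0), 0.0)

if __name__ == "__main__":
    unittest.main()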
Before shipping the final version of software, alpha and beta testing are often done
additionally:
• Alpha testing is simulated or actual operational testing by potential users/customers
or an independent test team at the developers' site. Alpha testing is often employed
for off-the-shelf software as a form of internal acceptance testing, before the
software goes to beta testing.
• Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase feedback from a maximal number of future users.
Benchmarks may be employed during regression testing to ensure that the performance of
the newly modified software will be at least as acceptable as the earlier version or, in the
case of code optimization, that some real improvement has been achieved.
Testing Tools
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as program monitors (permitting full or partial monitoring of program code), formatted dumps and symbolic debugging, automated functional GUI testing tools, and benchmarks.
Standards and Procedures
Establishing standards and procedures for software development is critical, since these
provide the structure from which the software evolves. Standards are the established
yardsticks to which the software products are compared. Procedures are the established
criteria to which the development and control processes are compared.
Standards and procedures establish the prescribed methods for developing software; the
SQA role is to ensure their existence and adequacy. Proper documentation of standards
and procedures is necessary since the SQA activities of process monitoring, product
evaluation and auditing rely upon clear definitions to measure project compliance.
Product evaluation is an SQA activity that assures standards are being followed.
Ideally, the first products monitored by SQA should be the project's standards and
procedures. SQA assures that clear and achievable standards exist and then evaluates
compliance of the software product to the established standards. Product evaluation
assures that the software product reflects the requirements of the applicable standard(s)
as identified in the Management Plan.
Process monitoring is an SQA activity that ensures that appropriate steps to carry out
the process are being followed. SQA monitors processes by comparing the actual steps
carried out with those in the documented procedures. The Assurance section of the
Management Plan specifies the methods to be used by the SQA process monitoring
activity.
A fundamental SQA technique is the audit, which looks at a process and/or a product in
depth, comparing them to established procedures and standards. Audits are used to
review management, technical, and assurance processes to provide an indication of the
quality and status of the software product.
The purpose of an SQA audit is to assure that proper control procedures are being
followed, that required documentation is maintained, and that the developer's status
reports accurately reflect the status of the activity. The SQA product is an audit report
to management consisting of findings and recommendations to bring the development
into conformance with standards and/or procedures.
• Software configuration authentication is established by a series of configuration reviews and audits that demonstrate that the software performs as required by the software requirements specification and that the configuration of the software is accurately reflected in the software design documents.
• Software development libraries provide for proper handling of software code,
documentation, media, and related data in their various forms and versions
from the time of their initial approval or acceptance until they have been
incorporated into the final media.
• Approved changes to baselined software are made properly and consistently in
all products, and no unauthorized changes are made.
• The test procedures are testing the software requirements in accordance with test
plans.
• The test procedures are verifiable.
• The correct or "advertised" version of the software is being tested (by SQA
monitoring of the CM activity).
• The test procedures are followed.
• Nonconformances occurring during testing (that is, any incident not expected in
the test procedures) are noted and recorded.
• Test reports are accurate and complete.
• Regression testing is conducted to assure nonconformances have been corrected.
Resolution of all nonconformances takes place prior to delivery.
Software testing verifies that the software meets its requirements. The quality of
testing is assured by verifying that project requirements are satisfied and that the
testing process is in accordance with the test plans and procedures.
SQA activities during the earlier, preliminary (architectural) design phase include:
• Reviewing PDR documentation and assuring that all action items are resolved.
• Assuring the approved design is placed under configuration management.
Software Detailed Design Phase
SQA activities during the detailed design phase include:
• Assuring that approved design standards are followed.
• Assuring that allocated modules are included in the detailed design.
• Assuring that results of design inspections are included in the design.
• Reviewing CDR documentation and assuring that all action items are resolved.
Compatibility
Computing environments that will require compatibility testing may include some or all of the elements mentioned below:
• Users have the same visual experience irrespective of the browsers through which
they view the web application.
• In terms of functionality, the application must behave and respond the same way
across different browsers.
Software compatibility testing: This is the evaluation of the performance of a system/application in connection with other software, for example software compatibility with operating tools for networks, web servers, messaging tools, etc.
Compatibility testing can help developers understand the yardsticks that their system/application needs to reach and fulfil in order to be accepted by intended users who are already using particular operating systems, networks, software and hardware. It also helps users to find out which system will better fit into the setup they are already using.
Certification testing falls within the scope of compatibility testing. Product vendors run the complete suite of tests on the newer computing environment to get their application certified for a specific operating system or database.
Verification and Validation
Verification and Validation (V&V) is the process of checking that a software system meets specifications and that it fulfils its intended purpose. It is normally part of the software testing process of a project.
In other words, validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place, while verification ensures that the product has been built according to the requirements and design specifications. Validation ensures that 'you built the right thing'. Verification ensures that 'you built it right'. Validation confirms that the product, as provided, will fulfil its intended use.
Accreditation is the formal certification that a model or simulation is acceptable to be
used for a specific purpose.
• Verification is the process of determining that a computer model, simulation, or federation of models and simulations, together with their associated data, accurately represents the developer's conceptual description and specifications.
Classification of methods
Test cases
The Quality Assurance (QA) team prepares test cases for verification; these help to determine whether the process that was followed to develop the final product is right.
The Quality Control (QC) team uses test cases for validation; these ascertain whether the product is built according to the requirements of the user. Other methods, such as reviews, also provide for validation when they are used early in the Software Development Life Cycle.
Verification and validation is often carried out by a group separate from the development team; in this case, the process is called "Independent Verification and Validation", or IV&V.
3.4 Regulatory environment
Verification and validation must also meet the compliance requirements of regulated industries, which are often guided by government agencies or industry administrative authorities. The FDA, for example, requires that software versions and patches be validated.
3.5 Software Verification & Validation Model
The Verification & Validation Model is used to improve the software project development life cycle.
Fig 8 Verification and Validation Model
Source: http://www.buzzle.com/editorials/4-5-2005-68117.asp
A good software product is developed when every step is taken in the right direction; that is to say, "a right product is developed in a right manner". The Verification and Validation Model helps to achieve this and also helps to improve the quality of the software product. The model not only makes sure that certain rules are followed at the time of development of the software, but also ensures that the product that is developed fulfils the required specifications. As a result, the risk associated with any software project is reduced to a certain level, by helping in the detection and correction of errors and mistakes that are unknowingly introduced during the development process.
Inspection:
Inspection involves a small team, usually of about 3-6 people, led by a leader, which formally reviews the documents and work products during various phases of the product development life cycle. The product, as well as the related documents, is presented to the team, whose members bring different interpretations to the presentation. The bugs that are discovered during the inspection are conveyed to the next level so that they can be taken care of.
Walkthroughs:
In a walkthrough, the review is carried out without formal preparation (of any presentation or documentation). During the walkthrough, the presenter/author introduces the material to all the participants in order to make them familiar with it. Though walkthroughs can help in finding bugs, they are mainly used for knowledge sharing and communication.
Buddy Checks:
A buddy check does not involve a team; rather, one person goes through the documents prepared by another person in order to find bugs which the author could not find previously.
These inspection, walkthrough and buddy-check activities are used for verification: each ascertains that the product is developed correctly and that every requirement, specification, design, piece of code, etc. is verified.
Code Validation/Testing:
Unit code validation, or unit testing, is a type of testing which the developers conduct in order to find bugs in the code unit/module they have developed. Code testing other than unit testing can be done by testers or developers.
Integration Validation/Testing:
Integration testing is conducted in order to find out whether different (two or more) units/modules work together properly. This test helps in finding out whether there is any defect in the interfaces between different modules.
Functional Validation/Testing:
This type of testing is meant to find out whether the system meets the functional requirements. In this type of testing, the system is validated for its functional behaviour. Functional testing does not deal with the internal coding of the project; instead, it checks whether the system behaves as per the expectations.