Software Engineering FULL ANSWER
2. What is meant by the present software crisis? What are two of its main
symptoms?
3. Many modern applications change frequently, both before they are
presented to the end user and after the first version has been put
into use. Suggest a few ways to build software to stop deterioration
due to change.
Deterioration due to change can be limited in several ways: design for change with a modular, loosely coupled architecture; apply information hiding so that changes stay local to one module; refactor regularly to keep the design clean as the code evolves; and maintain an automated regression test suite so that each change can be verified safely.
4. What is defined as Software? Explain your answer.
11. Do you design software when you "write" a program? What makes
software design different from coding? If a software design is not a
program (and it isn't), then what is it? How do we assess the quality of
a software design?
12. The process of Software Quality Management is known as what and
why?
26. Explain what is Unit Testing and who does the unit testing of
software?
27. Behavioural testing is part of what type of software testing? Explain
your answer.
◼ Black-Box testing. Behavioural testing checks the externally observable behaviour of the software against its specification, without reference to its internal structure, which is the defining characteristic of black-box testing.
28. How does a software process provide the framework from which a
comprehensive plan for software development can be established?
Explain.
29. Who defines the business issues that often have significant
influence on a software project?
30. What are the objectives of verification and validation of software?
--already answered.
35. Is software engineering applicable when Web Apps are built? If so,
how might it be modified to accommodate the unique characteristics
of Web Apps?
Yes, software engineering principles are applicable to the development of
web applications. To accommodate the unique characteristics of web
apps, modifications in software engineering practices can include:
Architecture:
- Emphasize scalable and distributed architectures to handle the
network-intensive nature of web applications.
- Consider client-server models and ensure responsiveness in the user
interface.
Approaches:
- Adopt web development frameworks and methodologies tailored for
the unique challenges of web applications.
- Incorporate client-side scripting for dynamic user experiences.
Tools:
- Utilize web-specific development tools, frameworks, and libraries for
efficient coding and testing.
- Integrate web-focused debugging and performance optimization tools.
Methods:
- Implement agile development practices for rapid iterations and
responsiveness to changing requirements.
- Prioritize user-centered design and usability testing for enhanced user
experience.
Processes:
- Incorporate continuous integration and continuous deployment
(CI/CD) pipelines for efficient and automated deployment of web
applications.
- Address the challenges of concurrent user access and ensure robust
session management.
Quality Focus:
- Place a strong emphasis on security, given the exposure to the internet
and potential vulnerabilities.
- Implement comprehensive testing strategies, including browser
compatibility and performance testing.
No, you should not try to complete the project by employing 50 developers
for a period of one month. Effort and schedule cannot simply be traded
against each other: as Brooks observed, adding people to a software project
increases communication overhead, because the number of communication
paths grows with the square of the team size, and new developers first need
time to learn the system. Planning, design, testing, and debugging also
contain inherently sequential work that cannot be compressed into one
month. In addition, employing 50 developers for a month would be very
expensive and a poor use of resources.
A more realistic estimate for the completion of a software project of this
size would be 6-12 months. This would allow for adequate time for
planning, testing, and debugging. Additionally, it would be more cost-
effective to hire a smaller team of developers and spread the project out
over a longer period.
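One reason large teams are inefficient is communication overhead: a team of n people has n(n-1)/2 pairwise communication paths. A quick sketch (the team sizes below are only illustrative):

```python
# Number of pairwise communication paths in a team of n people:
# n * (n - 1) / 2. The figures follow directly from the formula.
def communication_paths(n):
    return n * (n - 1) // 2

for n in (5, 10, 50):
    print(n, communication_paths(n))
# 5 -> 10, 10 -> 45, 50 -> 1225
```

A 50-person team has over a hundred times as many coordination channels as a 5-person team, which is why throwing people at a one-month deadline does not scale.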
38. Why is it difficult to accurately estimate the effort required for
completing a project? Briefly explain the different effort estimation
methods that are available. Which one would be the most advisable to
use and why?
Top-down Estimation:
Estimates the overall project effort from global characteristics (such as
size and project type) and then apportions that effort among the major
components.
Bottom-up Estimation:
Involves estimating the effort for individual tasks or units and
aggregating them to determine the overall project effort.
Three-Point Estimation:
Uses three estimates for each task: optimistic, pessimistic, and most
likely. It calculates the expected value, incorporating a range of
possibilities.
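The three estimates above are commonly combined into a single expected value with the PERT formula E = (O + 4M + P) / 6. A small sketch with made-up figures:

```python
# Three-point (PERT) expected value: E = (O + 4M + P) / 6, where O, M, P
# are the optimistic, most likely, and pessimistic estimates.
def pert_estimate(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

# e.g. a task estimated at 4 days optimistic, 6 likely, 14 pessimistic:
print(pert_estimate(4, 6, 14))  # 7.0
```

The weighting pulls the estimate toward the most likely value while still accounting for the pessimistic tail.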
Analogous Estimation:
Utilizes historical data from similar past projects to estimate the
effort for the current project.
Parametric Estimation:
Employs statistical relationships between project variables (e.g., size,
complexity) to estimate effort.
Dot Voting:
Not an effort estimation method but a collaborative decision-making
technique. Participants use dots to vote on different options, helping
prioritize or select the most favored ones.
Each estimation method has its strengths and weaknesses, and the choice
depends on the project's characteristics, available data, and the level of detail
required. It is common to use a combination of these methods to arrive at more
accurate and reliable estimates, especially in complex projects with various
uncertainties.
39. What do you understand by testability of a program? What are the
activities carried out during testing a software? Which one of these
activities takes the maximum effort? Between the two programs written
by two different programmers to solve essentially the same programming
problem, how can you determine which one is more testable?
41. The department of public works for a large city has decided to
develop a Web-based pothole tracking and repair system (PHTRS).
A description follows:
Citizens can log onto a website and report the location and severity of
potholes. As potholes are reported they are logged within a “public
works department repair system” and are assigned an identifying
number, stored by street address, size (on a scale of 1 to 10), location
(middle, curb, etc.), district (determined from street address), and
repair priority (determined from the size of the pothole). Work order
data are associated with each pothole and include pothole location
and size, repair crew identifying number, number of people on crew,
equipment assigned, hours applied to repair, hole status (work in
progress, repaired, temporary repair, not repaired), amount of filler
material used, and cost of repair (computed from hours applied,
number of people, material and equipment used). Finally, a damage file
is created to hold information about reported damage due to the
pothole and includes citizen’s name, address, phone number, type of
damage, and dollar amount of damage. PHTRS is an online system; all
queries are to be made interactively.
Draw a UML use case diagram PHTRS system. You’ll have to make a
number of assumptions about the manner in which a user interacts
with this system
Damage File:
- A damage file is created to store information about reported
damages due to potholes, including citizen's name, address, phone
number, type of damage, and dollar amount of damage.
Online System:
- The PHTRS is an online system, implying that all interactions and
queries are made through the web interface.
Query Interactivity:
- All queries are made interactively, indicating that users can
dynamically interact with the system to get real-time information.
Repair Status:
- Potholes can have different statuses such as "work in progress,"
"repaired," "temporary repair," or "not repaired."
Cost Computation:
- The cost of repair is computed from the hours applied, the
number of people, materials used, and equipment assigned.
Designed code:
Flow diagram:
43. You have been asked to develop a small application that
analyzes each course offered by a university and reports the
average grade obtained in the course (for a given semester). Write
a statement of scope that bounds this problem.
The application will be a web-based application that will allow users
to view the average grade obtained in each course offered by a
university for a given semester. The application will be limited to
the following features:
• Users will be able to select a university and a semester from a drop-down
menu.
• The application will then display a table of all courses offered by the university
for the selected semester, along with the average grade obtained in each
course.
• Users will be able to sort the table by course name, average grade, or any
other column.
• Users will be able to export the table to a CSV file.
The application will not be able to do the following:
The Placement Assistant software is a valuable tool for both students and
companies. It helps students to find jobs that are a good fit for their skills
and interests, and it helps companies to find qualified candidates for their
open positions. The software is also a valuable tool for schools, as it helps
them to track the progress of their students and to ensure that they are
successful in the placement process.
45. Draw a class diagram using the UML syntax to represent the
following. An engineering college offers B. Tech degrees in three
branches—Electronics, Electrical, and Computer Science &
Engineering. These B. Tech programs are offered by the respective
departments. Each branch can admit 120 students each year. For a
student to complete the B. Tech degree, he/she has to clear all the
30 core courses and at least 10 of the elective courses.
46. You have been appointed a project manager within an
information systems organization. Your job is to build an
application that is quite similar to others your team has built,
although this one is larger and more complex. Requirements have
been thoroughly documented by the customer. What team
structure would you choose and why? What software process
model(s) would you choose and why?
Team Structure
Given that the application is larger and more complex than previous projects, a
hybrid team structure would be most effective. This structure combines elements of
both traditional and agile approaches, providing the flexibility and adaptability
needed for complex projects while maintaining the structure and accountability
necessary for large-scale development.
47. Using the architecture of a house or building as a metaphor,
draw comparisons with software
architecture. How are the disciplines of classical architecture
and the software architecture similar?
How do they differ?
Both classical architecture and software architecture involve the design and
planning of structures or systems. In classical architecture, the focus is on the
physical design and layout of buildings and other structures, while in software
architecture, the focus is on the design and organization of software systems. One
similarity between the two disciplines is that both require a clear understanding
of the requirements and goals of the project.
However, there are also some key differences between the two disciplines. Classical
architecture is focused on the physical world and the materials used to construct
buildings, while software architecture is focused on the virtual world and the code
and technologies used to create software systems. Additionally, classical
architecture is typically more constrained by physical limitations, such as the laws
of physics and the availability of building materials, while software architecture
has more flexibility in terms of the technologies and tools that can be used.
48. Suppose a travel agency needs a software for automating its
book-keeping activities. The set
of activities to be automated are rather simple and are at
present being carried out manually. How
would you use the spiral model for developing this software?
The spiral model is a risk-driven software development process model that combines
iterative development with elements of the Waterfall model. Because the travel agency's
book-keeping activities are simple and already well understood, the risk analysis in each
spiral cycle would be quick, and the project could converge in a small number of iterations.
Here are the steps involved in using the spiral model for developing this software:
1. Define Objectives and Scope: Clearly identify the specific objectives of the software
and the scope of its functionality. This involves understanding the travel agency's
bookkeeping processes and identifying the areas where automation can provide the
most benefit.
2. Risk Assessment: Conduct a thorough risk assessment to identify potential risks
that could impact the project's success. This could include risks related to data
integrity, security, and integration with existing systems.
3. Prototype Development: Develop a prototype of the software to demonstrate its
functionality and gather user feedback. This allows for early validation of the
software's design and identification of any potential issues.
4. Evaluation and Planning: Evaluate the prototype and feedback from users to refine
the software's requirements and plan for the next iteration. This could involve adding
new features, modifying existing ones, or addressing any usability concerns.
5. Development and Integration: Develop the software based on the refined
requirements and integrate it with the travel agency's existing systems. This involves
ensuring data compatibility and seamless interaction with other business
applications.
6. Testing and Validation: Conduct rigorous testing to validate the software's
functionality, performance, and security. This may involve unit testing, integration
testing, system testing, and user acceptance testing.
7. Deployment and Maintenance: Deploy the software to the travel agency's
production environment and provide ongoing maintenance and support. This includes
addressing any bugs, implementing enhancements, and adapting to changing
business needs.
The spiral model's iterative nature allows for continuous refinement of the software and early
identification of potential issues. This makes it well-suited for developing software for a travel
agency, where requirements may evolve and unforeseen challenges may arise.
The pie chart shows the effort distribution in an iterative waterfall model. The largest portion
of effort is spent on implementation (40%), followed by planning (20%), design (10%), and
testing (10%).
Here is a breakdown of each phase:
• Planning: This phase involves gathering and analysing requirements, defining the
scope of the project, and developing a project plan.
• Design: This phase involves creating a detailed blueprint for the software, including
the user interface, database, and system architecture.
• Implementation: This phase involves writing the code, building the software, and
integrating the different components.
• Testing: This phase involves testing the software to ensure that it meets the
requirements and functions as intended.
The iterative waterfall model is a hybrid approach that combines the sequential steps of the
traditional waterfall model with the flexibility of iterative development. In this model, the
software is developed in a series of iterations, with each iteration focusing on a specific set
of requirements. The results of each iteration are used to refine the requirements and
improve the software design.
52.Identify five reasons as to why the customer requirements may change after
the requirements phase is complete and the SRS document has been signed
off.
There are many reasons why customer requirements may change after the
requirements phase is complete and the SRS document has been signed off.
Here are five of the most common reasons:
1. The customer may not have been fully aware of their needs at
the start of the project. This can happen for a few reasons, such as
the customer not having enough time to research their needs, or
the customer not being able to articulate their needs clearly.
2. The customer's needs may have changed since the start of the
project. This can happen due to the customer changing their mind
about what they want, or the customer's environment changing.
3. The customer may have discovered new information that
changes their requirements. This can happen due to several
factors, such as the customer conducting new research, or the
customer learning about new technologies.
4. The customer may have been given incorrect information by the
development team. This can happen due to a few factors, such as
the development team misunderstanding the customer's
requirements, or the development team making a mistake in the
SRS document.
5. The development team may have made a mistake in the SRS
document. This can happen due to several factors, such as the
development team misunderstanding the customer's
requirements, or the development team making a typo in the SRS
document.
These are just five of the most common reasons why customer requirements
may change. There are many other reasons that can cause customer
requirements to change.
53.What do you understand by the “99 percent complete” syndrome that
software project managers sometimes face? What are its underlying
causes? What problems does it create for project management?
What are its remedies?
The "99 percent complete" syndrome refers to tasks that are reported as nearly
finished for long stretches of time while the remaining work drags on. Its underlying
causes are subjective progress reporting (progress estimated as the fraction of code
written rather than against verifiable milestones) and systematic underestimation of
the final integration, debugging, and documentation work. For project management it
means schedule slippage is detected too late for corrective action, budgets overrun,
and stakeholder trust erodes. There are a few remedies for the "99% complete"
syndrome, including:
• More accurate estimation: Use better techniques for estimating the amount of
work that is required to complete a project.
• Smaller tasks: Break down tasks into smaller, more manageable pieces.
• Improved reporting: Provide stakeholders with regular updates on the project's
progress.
• Realistic expectations: Manage stakeholders' expectations by being realistic
about the project's timeline and scope.
By taking steps to address the underlying causes of the "99% complete" syndrome,
software project managers can help to avoid the problems that it can create.
Here are some of the advantages of using the spiral model:
• It allows for risk analysis and risk handling at every phase of the development
process.
• It is flexible and can be adapted to the specific needs of the project.
• It produces high-quality software that meets the needs of the users.
Here are some of the disadvantages of using the spiral model:
• It is complex to manage, and the repeated risk analysis requires specific expertise.
• It can be costly, which makes it a poor fit for small, low-risk projects.
• The end of the project can be hard to predict, since the number of spiral cycles is
not fixed in advance.
Project management activities end formally with the closure of the project, typically after the
maintenance phase. However, the project manager may continue to track post-
implementation metrics and user feedback to gather insights for future projects.
The key project management activities throughout the SDLC:
Bottom-up Estimation: This method involves estimating the effort for each
individual task and then summing up the estimates to get the overall project
effort. This method is more detailed but can be time-consuming and may not
consider dependencies between tasks.
The most advisable estimation method depends on the specific project and its
characteristics. For early-stage projects with incomplete requirements, expert
judgment or top-down estimation may be sufficient. For more mature projects
with well-defined requirements, bottom-up or parametric estimation may be
more appropriate.
Learning curve: When developers are first starting to work on a project, they
need to spend time learning the codebase and the tools that are being used.
This can be a time-consuming process. When developers have more time to
work on a project, they can spend less time on the learning curve and more
time on developing the product.
The COCOMO (Constructive Cost Model) estimation model primarily focuses on
the effort and cost associated with software development based on size and
other factors. However, there are several elements of the cost that are not
explicitly included in the COCOMO estimation model. These elements include:
• Hardware Costs
• Training Costs
• Rework Costs
• Overhead Costs
• External Dependencies
• Legal and Licensing Costs
• Integration Costs
• Cost of Delay
• Market Dynamics
• Post-Implementation Costs
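For contrast, what COCOMO does estimate can be sketched with the basic organic-mode formulas. The coefficients below (a=2.4, b=1.05, c=2.5, d=0.38) are the standard COCOMO-81 organic-mode values; the 32 KLOC size is a made-up example:

```python
# Basic COCOMO, organic mode: effort (person-months) and nominal
# development time (months) as functions of size in KLOC.
def cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05   # person-months
    tdev = 2.5 * effort ** 0.38   # months
    return effort, tdev

effort, tdev = cocomo_organic(32)
print(f"effort = {effort:.1f} PM, time = {tdev:.1f} months")
```

Note that the result covers development effort only; none of the cost elements listed above appear in the formula.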
Additional factors to consider in the buy/build decision:
Timeframe: Building the software in-house may take time, and if there are time
constraints, buying the off-the-shelf product may be more suitable.
Training: If the off-the-shelf product requires less training for users, it might be
a factor in the decision.
62.List three common types of risks that a typical software project might suffer
from. Explain how you can identify the risks that your project is susceptible
to. Suppose you are the project manager of a large software development
project, point out the main steps you would follow to manage various risks
in your software project.
Three common types of risks that a typical software project might suffer from
are:
1. Technical Risks:
- Technical risks pertain to challenges related to the software development
process, such as programming, architecture, and infrastructure issues.
- Identifying technical risks can involve conducting a thorough technical analysis
of the project, considering factors like the complexity of the software, the
experience of the development team, and the technology stack being used.
- Peer reviews, code inspections, and architectural reviews can help identify
potential technical risks. These can be flagged through discussions with the
development team, especially when they are aware of potential challenges
based on their experience.
2. Schedule Risks:
- Schedule risks are associated with project timelines and deadlines. Delays can
occur due to various reasons, including scope changes, resource constraints, or
unforeseen obstacles.
- To identify schedule risks, project managers should create a detailed project
schedule and consider factors like resource availability, dependencies between
tasks, and historical data from previous projects.
- Regular project status meetings and progress tracking can help in identifying
schedule risks early, enabling proactive mitigation strategies.
3. External Risks:
- External risks are factors that originate outside the project but can impact its
success. These may include changes in regulations, economic conditions,
market trends, third-party dependencies, and geopolitical events.
-To identify external risks, conduct a thorough analysis of the project's external
environment. Stay informed about relevant external factors and dependencies,
and consider their potential impact on the project. Regular monitoring and
communication with external stakeholders can help in identifying and
managing these risks.
1. Risk Assessment:
- Conduct a comprehensive risk assessment at the beginning of the project.
This involves brainstorming potential risks with our project team and
stakeholders. Consider the project's objectives, scope, constraints, and
dependencies to identify a wide range of risks.
2. Risk Analysis:
- After identifying potential risks, perform a qualitative and quantitative risk
analysis to prioritize them based on their impact and probability. This helps us
focus on the most critical risks that require attention.
3. Risk Mitigation Planning:
- For the high-priority risks, develop risk mitigation and contingency plans.
Define strategies to reduce the impact and likelihood of these risks, and outline
how the team will respond if they do occur.
4. Regular Monitoring:
- Throughout the project's lifecycle, continuously monitor and reassess the
identified risks. New risks may emerge, and the significance of existing risks
may change. Regular risk reviews and updates to the risk management plan are
crucial for staying proactive in risk management.
5. Communication:
- Maintain open and transparent communication with project stakeholders.
Make them aware of identified risks, mitigation plans, and their potential
impact on the project. Effective communication can help manage stakeholder
expectations and gain their support in addressing risks.
6. Risk Registers: Maintaining a centralized list of identified risks, including
their descriptions, potential consequences, and risk owners, can help in
tracking and managing risks throughout the project.
7. Historical Data: Reviewing data from past projects and industry benchmarks
can provide insights into common risks that software projects typically face.
8. Expert Opinions: Seek input from subject matter experts, experienced
project managers, and other knowledgeable individuals to identify risks specific
to the project domain.
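The prioritization in steps 1 and 2 is often quantified as risk exposure, RE = probability × loss. A small sketch (the risks and figures below are entirely hypothetical):

```python
# Risk exposure: RE = probability of the risk occurring * expected loss.
# Sorting by RE ranks risks for mitigation planning.
risks = [
    ("Key developer leaves", 0.3, 40000),
    ("Third-party API changes", 0.5, 15000),
    ("Requirements churn", 0.6, 25000),
]

ranked = sorted(
    ((name, prob * loss) for name, prob, loss in risks),
    key=lambda r: r[1],
    reverse=True,
)
for name, exposure in ranked:
    print(f"{name}: expected loss ${exposure:,.0f}")
```

In this made-up example, requirements churn tops the list even though it has a smaller loss than losing a key developer, because its probability is higher.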
II) Suppose you are the project manager of a large software development
project, point out the main steps you would follow to manage various risks in
your software project.
63.Consider a software project with 5 tasks T1-T5. Duration of the 5 tasks (in
days) are 15, 10, 12, 25 and 10, respectively. T2 and T4 can start when T1 is
complete. T3 can start when T2 is complete. T5 can start when both T3 and
T4 are complete. When is the latest start date of the task T3? What is the
slack time of the task T4?
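Assuming the project starts on day 0, question 63 can be worked out with a forward/backward pass over the task graph:

```python
# Forward/backward pass over the task graph from question 63.
durations = {"T1": 15, "T2": 10, "T3": 12, "T4": 25, "T5": 10}
preds = {"T1": [], "T2": ["T1"], "T3": ["T2"], "T4": ["T1"], "T5": ["T3", "T4"]}
order = ["T1", "T2", "T3", "T4", "T5"]  # already topologically sorted

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for t in order:
    es[t] = max((ef[p] for p in preds[t]), default=0)
    ef[t] = es[t] + durations[t]
project_end = max(ef.values())  # 50 days

# Backward pass: latest finish (LF) and latest start (LS).
succs = {t: [s for s in order if t in preds[s]] for t in order}
lf, ls = {}, {}
for t in reversed(order):
    lf[t] = min((ls[s] for s in succs[t]), default=project_end)
    ls[t] = lf[t] - durations[t]

slack = {t: ls[t] - es[t] for t in order}
print(f"LS(T3) = {ls['T3']}, slack(T4) = {slack['T4']}")
```

The critical path is T1 → T4 → T5 (15 + 25 + 10 = 50 days), so T4 has zero slack, and T3 can start as late as day 28 (its latest finish, day 40, minus its 12-day duration).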
64.What is the difference between a revision and a version of a software
product? What do you understand by the terms change control and version
control? Why are these necessary? Explain how change and version control
are achieved using a configuration management tool.
Change and version control are necessary to ensure the quality and integrity of
a software product. They also make it easier to manage changes to a software
product and to troubleshoot problems.
Explain how change and version control are achieved using a configuration
management tool:
A configuration management tool is a tool that can be used to implement
change and version control. A configuration management tool typically
provides a central repository for storing all changes to a software product. The
tool also provides features for tracking changes, approving changes, and
reverting to previous versions.
Here is an example of how change and version control can be achieved using a
configuration management tool:
• A developer makes a change to a software product.
• The developer checks the change into the configuration management
tool.
• The change is reviewed by another developer.
• The approved change is merged into the main codebase.
• A new version of the software product is built.
• The new version of the software product is released to users.
If a problem is found with the new version of the software product, the team
can use the configuration management tool to revert to a previous version.
65.What are the different project parameters that affect the cost of a project?
What are the important factors which make it hard to accurately estimate
the cost of software projects? If you are a project manager bidding for a
product development to a customer, would you quote the cost estimated
using COCOMO as your price bid? Explain your answer.
Different Project Parameters Affecting Project Cost:
- Scope:
The size and complexity of the project scope directly influence the cost. Larger
scopes generally require more resources and time.
- Timeline:
The duration allocated for project completion affects costs. Shorter timelines
may require more resources or overtime, impacting expenses.
- Resources:
Availability and cost of skilled personnel, technology, and tools play a crucial
role in project cost estimation.
There are several factors that can make it difficult to accurately estimate the
cost of a software project:
• Project scope
• Team skill levels
• Historical data availability
• Project complexity
• Changing requirements
• Technical issues
• External dependencies
• Software house experience
• Requirements specification
• Expected results
Explanation:
According to the question, the maximum array size is 100, and a size
variable is used so the user can specify the actual number of elements.
Code:
#include <stdio.h>

#define MAX_SIZE 100 // maximum array size given in the question

// Iterative binary search over a sorted array; returns the index of
// target, or -1 if it is not present.
int binarySearch(int arr[], int size, int target) {
    int left = 0;
    int right = size - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) {
            return mid; // Target found, return index
        } else if (arr[mid] < target) {
            left = mid + 1; // Target is in the right half
        } else {
            right = mid - 1; // Target is in the left half
        }
    }
    return -1; // Target not found
}

// Test cases
void testBinarySearch() {
    int arr[MAX_SIZE] = {5, 10, 15, 20, 25};
    int size = 5; // number of elements actually in use
    int testCases[] = {5, 10, 15, 20, 25};
    int numTestCases = sizeof(testCases) / sizeof(testCases[0]);

    for (int i = 0; i < numTestCases; i++) {
        int target = testCases[i];
        int result = binarySearch(arr, size, target);
        if (result != -1) {
            printf("Target %d found at index %d.\n", target, result);
        } else {
            printf("Target %d not found in the array.\n", target);
        }
    }
}

// Main function
int main() {
    testBinarySearch();
    return 0;
}
This code defines a binarySearch function that takes an array, its size, and the target value as
parameters and returns the index of the target value if found or -1 otherwise. The
testBinarySearch function tests the binary search function with a set of test cases, and the
results are printed to the console.
69. Some people say that “variation control is the heart of quality control.” Since every
program that is created is different from every other program, what are the variations
that we look for and how do we control them? (Note: the quote originally refers to
quality control in mass production, where every unit is meant to be identical; software
quality control is of a different kind.)
The statement "variation control is the heart of quality control" emphasizes the importance
of managing and minimizing variations in processes to ensure consistent and high-quality
outcomes. While this phrase is often associated with manufacturing and mass production, it
can be applied to software development and other fields as well.
In the context of software development, the variations that are crucial to control for quality
include:
1. Code Consistency:
- Coding Standards: Ensuring that code follows a consistent style and adheres to coding
standards helps improve readability and maintainability. Tools like linters can be used to
enforce coding standards.
2. Testing Variations:
- Test Coverage: Ensuring comprehensive test coverage helps identify and address potential
issues across different parts of the code. Variations in test coverage can lead to untested or
under-tested code.
3. Version Control:
- Versioning: Using version control systems like Git helps manage variations in the
codebase over time. It allows developers to track changes, collaborate effectively, and roll
back to previous versions if needed.
8. Security:
- Code Security: Implementing secure coding practices and regularly assessing the code for
security vulnerabilities helps control variations in potential security risks.
70. Design a project database (repository) system that would enable a software engineer to
store, cross reference, trace, update, change, and so forth all important software
configuration items. How would the database handle different versions of the same
program?
1. Project Table:
- ProjectID (Primary Key)
- ProjectName
- Description
- StartDate
- EndDate
2. Software Configuration Item (SCI) Table:
- SCIID (Primary Key)
- ProjectID (Foreign Key referencing Project Table)
- FileName
- FilePath
- FileType
- Version
- Author
- DateCreated
3. Cross-Reference Table:
- CrossReferenceID (Primary Key)
- SCIID1 (Foreign Key referencing SCI Table)
- SCIID2 (Foreign Key referencing SCI Table)
- RelationshipType
- Description
Functionality:
1. Storing:
- Software Configuration Items are stored in the SCI Table with details like file name, path,
type, version, author, etc.
2. Cross-Referencing:
- The Cross-Reference Table allows you to establish relationships between different SCIs,
helping to track dependencies and connections.
3. Tracing:
- Traceability can be achieved through the Project and SCI tables. Each SCI is associated
with a specific project, and you can trace changes and dependencies using the Cross-
Reference Table.
4. Updating/Changing:
- Changes to SCIs are logged in the Change Log Table, recording details such as the date of
change, type of change, and the author. This provides an audit trail for all modifications.
Additional Considerations:
• Access Control
• Integration with Version Control System
• User Interface
• Automation
• Backup and Recovery
Handling different versions of the same program in a database involves structuring the
database to accommodate versioning information and establishing relationships between
different versions. Below are considerations for managing different versions of software in
the database:
SCI Table:
- SCIID (Primary Key)
- ProjectID (Foreign Key referencing Project Table)
- FileName
- FilePath
- FileType
- VersionNumber
- Author
- DateCreated
2. Linking Versions:
- Create a field, e.g., `ParentSCIID`, in the SCI Table to link versions of the same software
configuration item. This field can reference the SCIID of the previous version.
SCI Table:
- SCIID (Primary Key)
- ParentSCIID (Foreign Key referencing SCIID in the same table)
- ProjectID (Foreign Key referencing Project Table)
- FileName
- FilePath
- FileType
- VersionNumber
- Author
- DateCreated
Versioning Strategies:
1. Sequential Versioning:
- Each new version gets a sequential version number (e.g., 1.0, 1.1, 1.2, ...).
2. Semantic Versioning:
- Use a versioning scheme that follows semantic versioning principles (e.g.,
MAJOR.MINOR.PATCH).
3. Branching:
- Implement version branching if major changes are occurring concurrently (e.g.,
development branch, stable branch).
Example Query (retrieve the latest version of a file):
SELECT *
FROM SCI
WHERE FileName = 'YourFileName'
ORDER BY VersionNumber DESC;
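The ParentSCIID-based version tracking described above can be sketched in a few lines of Python with SQLite; the table and column names are simplified from the schema above, and the sample file names and versions are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified SCI table: each row is one version of a configuration item,
# linked to its predecessor through parent_sci_id.
conn.execute("""
    CREATE TABLE sci (
        sci_id        INTEGER PRIMARY KEY,
        parent_sci_id INTEGER REFERENCES sci(sci_id),
        file_name     TEXT NOT NULL,
        version       TEXT NOT NULL,
        author        TEXT
    )""")
conn.execute("INSERT INTO sci VALUES (1, NULL, 'billing.c', '1.0', 'asha')")
conn.execute("INSERT INTO sci VALUES (2, 1,    'billing.c', '1.1', 'ravi')")

# Latest version of a given item: the most recently created row for it.
latest = conn.execute(
    "SELECT version FROM sci WHERE file_name = ? "
    "ORDER BY sci_id DESC LIMIT 1", ("billing.c",)).fetchone()
print(latest[0])  # 1.1
```

Ordering by the surrogate key rather than by the version string avoids problems with textual comparisons such as '1.10' sorting before '1.2'.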
71. Based your knowledge and your own experience, make a list of 10 guidelines for
software people to work to their full potential.
72. You have been given the responsibility for improving the quality of software across your
organization. What is the first thing that you should do? What is next?
→
First thing to do:
Test early and test often, with automation. Early testing ensures that defects do not
snowball into larger, more complicated issues; the longer a defect survives, the more
expensive it becomes to fix.
The earlier the testers are involved, the better. Involve testers early in the software design
process so that they stay on top of problems and bugs as they crop up, before the issues
grow to the point where they become much harder to debug.
Testing often requires a focus on early adoption of the right automated testing discipline.
Start by automating non-UI tests, then gradually increase coverage to UI-based tests as the
product stabilises. If the application exposes web services/APIs, automate those tests to
ensure all the business rules and logic are exercised.
→
NEXT STEPS:
After the initial testing discipline is in place, the following steps are important:
a. Implement Quality Control: Testers can monitor quality controls and create awareness in
partnership with developers to ensure standards are continually being met. Quality control
starts from the beginning, which is an ongoing process throughout the delivery.
b. Echo the importance of quality assurance through the entire software development
process: Quality Assurance is a governance provided by the project team that instils
confidence in the overall software quality. Assurance testing oversees and validates the
processes used to deliver outcomes have been tracked and are functioning. Testing should be
repeated as each development element is applied. Think of it as layering a cake. After every
layer is added, the cake should be tasted and tested again.
c. Encourage Innovations:
It is important that testing structures and quality measures are in place, however, there should
always be room for innovation. A great way to allow for innovation is to automate testing
where possible to minimise time spent on controls.
d. Risk Management: A risk register is a valuable management tool for tracking risks. Risk
registers are more commonly associated with financial auditing; however, they are a vital
element in software development as well.
74. What are the three models provided by COCOMO 2 to arrive at increasingly accurate
cost estimations.
→ COCOMO II provides three models for increasingly accurate estimation as more becomes
known about the project: the Application Composition model (earliest stage, based on object
points), the Early Design model (once requirements are stabilized, based on function points),
and the Post-Architecture model (once the architecture is established, based on source lines
of code).
76. While using COCOMO, which one of the project parameters is usually the first to be
estimated by a project manager?
The first project parameter a project manager estimates is the size of the product,
expressed in Source Lines of Code (SLOC or KLOC). The Basic COCOMO model, which is
ideal for early-stage estimates, computes effort from this size estimate alone, using a
power-law (not linear) relationship between project size and effort.
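As an illustration, the Basic COCOMO equations for an organic-mode project (coefficients from Boehm's published tables) can be applied to a size estimate; the 32 KLOC figure below is just an example value:

```python
def basic_cocomo_organic(kloc):
    """Basic COCOMO, organic mode: effort in person-months,
    duration in months. Coefficients from Boehm (1981)."""
    effort = 2.4 * kloc ** 1.05      # power-law in size, not linear
    duration = 2.5 * effort ** 0.38  # nominal development time
    return effort, duration

effort, duration = basic_cocomo_organic(32)
print(round(effort, 1), round(duration, 1))  # 91.3 13.9
```

Because the size exponent exceeds 1, doubling the size more than doubles the estimated effort, which is why the relationship cannot be treated as linear.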
77. Name the four elements that exist when an effective SCM system is implemented.
The four basic requirements for a software configuration management (SCM) system are:
Identification, Control, Audit, Status accounting.
78. How is an application program’s “version” different from its “release”?
A version is a concrete, specific build of the software, identified by a version number such
as 1.0. Many versions are created purely for internal development or testing and are not
intended for release to customers. A release, by contrast, is the process of publishing the
software: the distribution of a finished version of an application to its users. A release may
be either public or private.
79. If a project is already delayed, then will it help by adding manpower to complete it at the
earliest?
No, adding manpower to a delayed project will not help in completing it at the earliest. In fact, it
may even delay the project further. This is because adding more people to a project increases the
amount of communication and coordination required, which can lead to delays. Additionally,
new team members may need time to get up to speed on the project, which can also lead to
delays.
80. What may be a reason as to why COCOMO project duration estimation may be more accurate
than the duration estimation obtained by using the expression, duration = size/productivity?
COCOMO's comprehensive approach, considering project complexity, parameterization,
historical data, cost factors, and expert judgment, contributes to its potential for more accurate
project duration estimations compared to a simple formula like "duration = size/productivity."
81. The use of resources such as personnel in a software development project varies with time.
What is the distribution that is the best approximation for modelling the personnel requirements
for a software development project?
The distribution that best approximates personnel requirements in a software development
project is the Rayleigh distribution, the basis of the Putnam–Norden–Rayleigh staffing
model. Staffing starts low, builds up to a peak around the middle of development, and then
tails off gradually through the later phases, a shape the Rayleigh curve captures far better
than a uniform or symmetric profile.
82. Based on what criteria do we typically define Parametric estimation models?
Parametric estimation is a statistical technique that uses historical and statistical data to calculate
the time, cost, and resources needed for a project. It uses mathematical models to generate
estimates based on input values. Parametric estimation models are based on the following
criteria: Historical data, Statistical techniques, Mathematical equations, Research, Industry-
specific data, Expertise.
83. How do you correctly characterize the accuracy of project estimations carried out at different
stages of the project life cycle?
Project estimation accuracy varies across different stages of the project life cycle. In the initial
stages, estimates may be less accurate due to limited information and uncertainties. As the
project progresses, with more details available and risks mitigated, estimates become more
precise. Regularly reassess and refine estimates to adapt to changing conditions. Use historical
data and feedback loops to enhance future estimations.
Acknowledge that accuracy may improve as the project advances, and factor in contingencies.
Transparent communication about estimation uncertainties fosters stakeholder understanding.
Employing agile methodologies can facilitate iterative adjustments, enhancing adaptability and
overall estimation accuracy throughout the project life cycle.
84. Which model was used during the early stages of software engineering, when prototyping of
user interfaces, consideration of software and system interaction, assessment of performance,
and evaluation of technology maturity were paramount?
→ Application composition model
85. Suppose you are the project manager of a software project. Based on your experience with three
similar past projects, you estimate the duration of the project, i.e., 5 months. What of the
various types of estimation techniques have you made use of? Name the technique and explain
your reason.
The estimation technique applied in this scenario is analogous estimating. This method involves
drawing parallels between the current project and past projects with similar characteristics. By
leveraging data from three previous software projects, I can extrapolate and estimate the duration
for the new project. Analogous estimating is efficient when historical information is relevant and
accurate, providing a quick and relatively simple way to gauge project timelines. However, it assumes
that the similarities between projects are significant enough to warrant a reliable estimate, and
adjustments may be necessary for any unique factors in the current project that differ from the past
ones.
86. Which version of COCOMO states that once requirements have been stabilized, the basic
software architecture has been established?
→ Early design stage model
87. Why do you think some late projects become later due to the addition of manpower?
→Brooks' Law states, “Adding manpower to a late software project makes it later.” When
people are added to an already-late project, the newcomers must be trained by the existing
staff (reducing their productivity), and the number of communication paths in the team
grows rapidly, so the project tends to take longer rather than shorter.
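One concrete driver behind Brooks' Law is the growth in pairwise communication channels, which for a team of n people is n(n−1)/2:

```python
def communication_paths(n):
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

# Doubling the team from 5 to 10 people more than quadruples
# the number of channels that must be kept in sync.
print(communication_paths(5), communication_paths(10))  # 10 45
```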
88. What is a Functional Requirement?
→Functional requirements are the details and instructions that dictate how software performs and
behaves. Software engineers create and apply functional requirements to software during the
development stages of a project.
Some common functional requirements include: business rules, administrative functions,
authentication and authorization, transaction handling, audit tracking, external interfaces,
and legal or regulatory compliance. (Attributes such as usability and reliability are normally
classified as non-functional requirements.)
89. Which type of software development team has no permanent leader?
A Democratic decentralized software development team has no permanent leader. In this type of
team, software developers temporarily coordinate with each other to complete tasks.
90. Which software development model is not suitable for accommodating any change?
→The Waterfall Model
91. What is system software and what is applications software?
System software refers to a collection of programs that manage and operate the computer
hardware, facilitating the execution of application software. It includes the operating system,
device drivers, and utility software, serving as an interface between hardware and user
applications. In contrast,
Applications software encompasses programs designed to perform specific tasks for end-users.
Examples include word processors, web browsers, and accounting software. Unlike system
software, applications software focuses on meeting user needs and enhancing productivity.
Together, system and applications software form the essential components of a computer
environment, enabling its functionality and supporting diverse user activities.
92. Software quality assurance consists of which functions of management?
→Software quality assurance consists of the auditing and reporting functions of management.
93. Give a brief description of Work Breakdown Structure.
→A work breakdown structure is a diagram that shows the connections between the objectives,
measurable milestones, and deliverables (also referred to as work packages or tasks).
94. What is an SPMP document? What is its utility?
An SPMP document stands for Software Project Management Plan. It is a comprehensive document
that outlines the approach and procedures for managing a software project throughout its life cycle.
The utility of an SPMP lies in providing a structured framework for project managers and
stakeholders. It defines project scope, objectives, timelines, resource allocation, risk management,
communication strategies, and quality assurance measures. By serving as a reference guide, the SPMP
enhances communication, minimizes risks, and ensures that the project is executed in a systematic and
controlled manner. It is a crucial tool for effective planning, execution, and monitoring of software
development projects.
95. What does an Activity Network show us?
→An Activity Network Diagram is a diagram of project activities that shows the sequential
relationships of activities using arrows and nodes. It is used extensively in project planning
and is necessary for the identification of a project's critical path.
96. What does CPM refer to? How is it useful for software project management?
The critical path method (CPM) is a technique where you identify tasks that are necessary for project
completion and determine scheduling flexibilities.
The critical path method is a reliable way for project managers to budget time and allocate resources.
Advantages of CPM include improved accuracy and flexibility in scheduling, clearer communication
between project managers and stakeholders, easier task prioritization, and more.
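A minimal sketch of the earliest-finish computation that underlies CPM; the task names, durations, and dependencies below are invented for illustration:

```python
# Each task: (duration in days, list of prerequisite tasks).
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (2, ["B", "C"]),
}

def earliest_finish(tasks):
    """Earliest finish time of every task: its duration plus the
    latest earliest-finish among its prerequisites."""
    ef = {}
    def finish(t):
        if t not in ef:
            dur, deps = tasks[t]
            ef[t] = dur + max((finish(d) for d in deps), default=0)
        return ef[t]
    for t in tasks:
        finish(t)
    return ef

ef = earliest_finish(tasks)
print(max(ef.values()))  # 9  (critical path A -> C -> D)
```

The project cannot finish earlier than the longest such chain, which is exactly the critical path; any delay on A, C, or D here delays the whole project.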
97. What is PERT and its utility in software project management?
→ A program evaluation and review technique (PERT) chart is a graphical representation of a
project's timeline that displays all the individual tasks necessary to complete the project. PERT
enables program managers to plan the movement toward program objectives and to monitor
the progress made toward those objectives at any point in time. A PERT analysis identifies a
network of activities, their dependencies, and the time needed for each activity.
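PERT's three-point estimate can be computed directly from the standard formula te = (O + 4M + P) / 6 with standard deviation (P − O) / 6; the optimistic, most-likely, and pessimistic values below are illustrative:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Three-point PERT estimate: weighted mean and standard deviation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

te, sd = pert_estimate(4, 6, 14)
print(te, round(sd, 2))  # 7.0 1.67
```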
98. What are Gantt Charts?
→A Gantt chart is a project management tool assisting in the planning and scheduling of projects
of all sizes, although they are particularly useful for simplifying complex projects.
99. Who are the users of SRS (Software Requirements Specification)?
The primary users of Software Requirements Specification (SRS) are software developers and
quality assurance (QA) teams. Developers rely on the SRS to understand and implement the
functional and non-functional requirements of the software, while the QA team uses it to develop
test cases and ensure the software meets the specified requirements.
100. Briefly state the main objective of ‘code walkthrough’?
The main objective of a code walkthrough is to identify and address issues in the source code
through a collaborative and systematic review process. This activity aims to ensure code quality,
adherence to coding standards, and the identification of potential bugs or logical errors. During a
walkthrough, team members inspect the code line by line, focusing on correctness,
maintainability, and adherence to design specifications. This collaborative review helps in
knowledge sharing among team members, improves the overall quality of the codebase, and
provides an opportunity to catch and rectify issues early in the development process.
101. Briefly state the goals of software verification and validation.
The goals of software verification and validation (V&V) are to ensure that a software system meets
specified requirements and functions correctly.
Verification aims to confirm that the software is designed and implemented according to its
specifications. It involves reviews, inspections, and other static methods.
Validation, on the other hand, focuses on assessing the dynamic behavior of the software during
execution to ensure it satisfies user needs.
Together, verification and validation activities aim to enhance software quality, reduce the
likelihood of defects, and provide confidence that the software functions as intended, meeting both
technical and user requirements.
102. Name the different types of testing carried out after the development team has handed over the
software to the testing team.
After the development phase, testing activities include
Integration Testing for component interactions,
System Testing to assess the entire system,
Acceptance Testing for user approval,
Regression Testing to prevent new issues, and
Performance Testing to evaluate system responsiveness and
scalability before release.
103. Statement coverage is usually not considered to be a satisfactory testing of a program unit.
Briefly explain the reason behind this.
Statement coverage, while a valuable metric, is not always considered satisfactory for testing a
program unit because it does not guarantee comprehensive testing of all possible scenarios.
Achieving 100% statement coverage means executing every line of code at least once, but it does
not ensure that all logical branches, conditions, and potential errors are exercised. Therefore, a
program may have high statement coverage yet contain undetected defects or untested scenarios.
For thorough testing, additional criteria like branch coverage, condition coverage, and path
coverage are often considered to ensure a more complete assessment of the code's behavior.
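A small example makes the limitation concrete: the function below reaches 100% statement coverage from a single test, yet the outcome in which the branch is not taken hides a crash that statement coverage cannot reveal:

```python
def reciprocal_or_default(x):
    y = 0
    if x != 0:
        y = x
    return 1 / y

# One test with x = 2 executes every statement (100% statement
# coverage): the assignment, the branch, and the return.
assert reciprocal_or_default(2) == 0.5
# But the branch-not-taken path was never exercised:
# reciprocal_or_default(0) raises ZeroDivisionError.
```

Branch coverage would force a second test with x = 0 and expose the defect immediately.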
104. What is the difference between black-box testing and white-box?
Black-box testing treats the software as an opaque unit: test cases are designed purely from
the specified requirements and expected behaviour, without any knowledge of the internal
code or structure. White-box testing, in contrast, designs test cases from the internal logic
of the program, its source code, branches, and paths, in order to exercise its structure
directly.
105. What is the difference between internal and external documentation?
In software engineering, internal documentation is the documentation embedded within the
source code itself, i.e., comments, self-describing identifiers, and header blocks, intended for
the developers who maintain the code. External documentation is kept separately from the
code and includes artifacts such as the SRS, design documents, user manuals, and
installation guides, intended for users, testers, and other stakeholders.
106. What is meant by structural complexity of a program? Define a metric for measuring the
structural complexity of a program.
The structural complexity of a program refers to the intricacy and interconnectedness of its
components, such as functions, modules, or classes. It assesses the program's internal complexity
based on its structure and organization.
One metric for measuring structural complexity is the Cyclomatic Complexity. Cyclomatic
Complexity is calculated using the formula:
M = E - N + 2P
where:
- (M) is the Cyclomatic Complexity,
- (E) is the number of edges in the control flow graph,
- (N) is the number of nodes in the control flow graph,
- (P) is the number of connected components in the graph.
A higher Cyclomatic Complexity suggests increased code complexity and may indicate a higher
likelihood of defects or the need for additional testing.
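For instance, the formula above can be evaluated for the control flow graph of a single if/else construct:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """M = E - N + 2P for a control-flow graph."""
    return len(edges) - len(nodes) + 2 * components

# CFG of one if/else: node 1 = decision, 2 = then-branch,
# 3 = else-branch, 4 = join/exit.
nodes = {1, 2, 3, 4}
edges = {(1, 2), (1, 3), (2, 4), (3, 4)}
print(cyclomatic_complexity(edges, nodes))  # 2
```

M = 4 − 4 + 2 = 2, matching the two independent paths (and one decision) in the code.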
107. What do you understand by positive and negative test cases?
1. Positive Test Cases:
- Purpose: Positive test cases verify that the system functions as expected when provided with valid
inputs or under normal operating conditions.
- Example: Testing a login functionality with a correct username and password to ensure the system
grants access.
2. Negative Test Cases:
- Purpose: Negative test cases verify that the system handles invalid inputs or unexpected conditions
gracefully, rejecting them with appropriate error handling rather than failing.
- Example: Testing the same login functionality with an incorrect password to ensure the system
denies access and reports a suitable error.
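The login example can be sketched in code; the login function and user store below are invented purely for illustration:

```python
def login(username, password, db):
    """Return True only for a known user with the matching password."""
    return db.get(username) == password

users = {"alice": "s3cret"}

# Positive case: valid credentials are accepted.
assert login("alice", "s3cret", users) is True
# Negative cases: wrong password and unknown user are rejected.
assert login("alice", "wrong", users) is False
assert login("bob", "s3cret", users) is False
```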
108. What is the difference between a coding standard and a coding guideline?
→ A coding standard is a set of mandatory rules that must be followed (for example, naming
conventions, file organization, and error-handling rules) and whose compliance is typically
checked during code reviews. A coding guideline is a set of recommended best practices
(for example, preferring clarity over cleverness, or limiting function length) that
programmers are encouraged, but not strictly required, to follow.
• Non-Functional Requirements:
1. Performance: The clock should not significantly impact the performance of the
computer it runs on.
2. Reliability: The clock should be reliable and not crash or experience unexpected
behaviour.
3. Usability: The clock should be easy to use and understand for all users.
4. Security: The clock should be secure and not susceptible to hacking or malware.
5. Maintainability: The clock should be easy to maintain and update.
122. Identify any inconsistencies, anomalies, and incompleteness that are present in the following
requirements that were gathered by interviewing the clerks of the ECE department for developing
an academic automation software (AAS): “The CGPA of each student is computed as the average
performance for the semester. The parents of all students having poor performance are mailed a
letter informing about the poor performance of their ward and with a request to convey a warning
to the student that the poor performance should not be repeated.”
An analysis of the inconsistencies, anomalies, and incompleteness present in the requirements for
developing an academic automation software (AAS):
Inconsistencies:
1. Ambiguity in "poor performance": The phrase "poor performance" is subjective and lacks
a clear definition. What constitutes "poor performance" for one student might not be the same
for another. This ambiguity could lead to inconsistencies in identifying students who require
parental notification.
2. Lack of criteria for parental notification: The requirement specifies that parents of "all
students having poor performance" should be notified. However, it does not define the criteria
for determining which students fall under this category. Leaving this undefined could lead to
inconsistencies in notification practices.
Anomalies:
1. No mention of student involvement: The requirement focuses on notifying parents about
their ward's performance but does not mention any direct communication with the student
concerned. This could lead to a situation where parents are informed about the student's
performance without the student being directly involved in the process.
2. Potential for parental overreaction: Notifying parents about their child's "poor
performance" without providing context or considering individual circumstances could lead to
unnecessary stress or anxiety for the parents and the student.
Incompleteness:
1. No definition of semester performance: The requirement mentions that CGPA is computed
as the average performance for the semester, but it does not define how semester performance
is calculated. This incompleteness could lead to inconsistencies in CGPA calculations.
2. Lack of follow-up or support measures: The requirement only mentions notifying parents
about poor performance but does not specify any follow-up or support measures for the
student. Leaving this undefined could limit the effectiveness of the notification process.
123. In the context of software development, distinguish between analysis and design with respect
to intention, methodology, and the documentation technique used.
Analysis:
Software analysis is the process of understanding the problem and requirements for a software
system. It involves gathering information from stakeholders, identifying problems with the current
system, and defining the requirements for the new system. The goal of analysis is to create a complete
and accurate understanding of the problem and the needs of the users.
Design:
Software design is the process of creating a solution to the problem defined in the analysis phase. It
involves creating models, diagrams, and specifications that describe the architecture, components, and
interfaces of the software system. The goal of design is to create a system that is efficient, reliable,
maintainable, and scalable.
Documentation:
The documentation produced during analysis and design is typically used to communicate the
requirements and design decisions to stakeholders. The requirements document is a formal document
that defines the functional and non-functional requirements of the software system. The design
document is a more technical document that describes the architecture, components, and interfaces of
the software system.
124. A customer asks you to complete a project, whose effort estimate is E, in time T. How will
you decide whether to accept this schedule or not?
Deciding whether to accept a project schedule is a critical decision that should be based on a thorough
assessment of various factors. Here is a comprehensive approach to evaluating the feasibility of
accepting a project schedule:
1. Evaluate project scope and complexity: Carefully review the project scope to fully
understand the deliverables, tasks, and their interdependencies. Assess the complexity of the
project, considering factors such as the novelty of the technology, the level of customization,
and the integration with existing systems.
2. Analyse historical data: Gather historical data from previous projects, particularly those of
similar size and complexity. Analyse the average effort and duration of similar projects to
establish a baseline for comparison with the proposed schedule. In particular, compare the
quoted time T against the nominal duration implied by the effort estimate E (for example,
via COCOMO's schedule equation); schedules compressed much below roughly 75% of the
nominal duration are generally considered infeasible.
3. Consider team expertise and availability: Assess the expertise and availability of your team
members. Evaluate their experience in handling projects of similar scope and complexity.
Determine if there are any resource constraints or potential conflicts with existing
commitments.
4. Conduct risk assessment: Identify potential risks that could impact the project's timeline or
resource requirements. Assess the likelihood and severity of each risk and develop mitigation
strategies to address them.
5. Evaluate project dependencies: Determine if the project has any dependencies on external
factors or other projects. Analyse the potential impact of these dependencies on the proposed
schedule.
6. Assess communication and collaboration: Evaluate the communication and collaboration
channels within the team and with the client. Ensure clear expectations and regular
communication are established to minimize misunderstandings and delays.
7. Consider budget constraints: Evaluate the project's budget and the potential impact of the
proposed schedule on financial feasibility. Assess if the schedule aligns with the allocated
budget and if it can be completed within the financial constraints.
8. Negotiate with the client: If necessary, negotiate with the client to adjust the schedule or
scope of the project to reach a mutually agreeable timeframe that is realistic and achievable.
9. Document the decision: Once a decision is made, clearly document the rationale behind
accepting or rejecting the proposed schedule. This documentation serves as a reference for
future projects and helps maintain transparency with the client.
By carefully considering these factors and following a structured decision-making process, you can
make informed choices about accepting or rejecting project schedules, ensuring project success and
client satisfaction.
125. List some practices that you will follow while developing a software system using an object-
oriented approach to increase cohesion and reduce coupling.
Here are some practices to follow while developing a software system using an object-oriented
approach to increase cohesion and reduce coupling:
Increase Cohesion
• Follow the Single Responsibility Principle (SRP): Each class should have a single well-
defined responsibility. This means that a class should only be responsible for one thing, and
all its methods should be related to that responsibility.
• Group related methods together: Methods that are related to each other should be grouped
together within a class. This will make the class easier to understand and maintain.
• Avoid unnecessary methods: Methods that are not needed by the class should be removed.
This will help to reduce the size of the class and make it more focused.
• Use meaningful method names: Method names should be descriptive and clearly indicate
what the method does. This will make the code easier to read and understand.
• Use private data members: Data members that are only used by a single class should be
declared as private. This will help to protect the data from being accessed by other classes.
Reduce Coupling
• Favor composition over inheritance: Composition is a way to create a new class by
combining the functionality of existing classes. Inheritance is a way to create a new class that
inherits the functionality of an existing class. Composition is generally preferred over
inheritance because it allows for more flexibility and reusability.
• Use interfaces to define dependencies: Interfaces can be used to define the dependencies
between classes. This will make it easier to change the implementation of a class without
affecting the classes that depend on it.
• Use dependency injection: Dependency injection is a technique for providing a class with its
dependencies. This can be done through constructor injection, method injection, or property
injection. Dependency injection can help to reduce coupling between classes because it makes
it easier to test and reuse classes.
• Minimize global variables: Global variables can lead to tight coupling between classes
because they can be accessed by any class in the program. Global variables should be avoided
whenever possible.
• Use loose coupling mechanisms: There are a number of loose coupling mechanisms that can
be used to reduce coupling between classes, such as message passing and polymorphism.
By following these practices, you can develop software systems that are more cohesive and less
coupled. This will lead to code that is easier to understand, maintain, and reuse.
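As a sketch of constructor-based dependency injection from the list above (the class and method names here are invented for illustration):

```python
class SmtpMailer:
    """Production mailer (delivery details elided)."""
    def send(self, to, body):
        raise NotImplementedError

class RecordingMailer:
    """Test double that records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class OrderService:
    # The mailer is injected through the constructor, so OrderService
    # depends only on the send() interface, not on any concrete
    # SMTP implementation - coupling stays loose and testing is easy.
    def __init__(self, mailer):
        self.mailer = mailer
    def place_order(self, customer, item):
        self.mailer.send(customer, f"Order confirmed: {item}")

mailer = RecordingMailer()
OrderService(mailer).place_order("a@example.com", "book")
print(mailer.sent)  # [('a@example.com', 'Order confirmed: book')]
```

Swapping `SmtpMailer` for `RecordingMailer` requires no change to `OrderService`, which is exactly the reuse and testability benefit dependency injection is meant to provide.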
126. Suppose that the last round of testing, in which all the test suites were executed but no faults
were fixed, took 7 full days (24 hours each). And in this testing, the number of failures that were
logged every day were: 2, 0, 1, 2, 1, 1, 0. If it is expected that an average user will use the
software for two hours each day in a manner that is similar to what was done in testing, what is
the expected reliability of this software for the user?
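The question leaves the reliability model open; a common textbook approach, sketched below, assumes a constant failure rate estimated from the test data and an exponential reliability function. Under that assumption:

```python
import math

hours_tested = 7 * 24                 # 7 full days of testing
failures = [2, 0, 1, 2, 1, 1, 0]      # failures logged per day
lam = sum(failures) / hours_tested    # estimated failure rate: 7/168 per hour

# Reliability = probability of no failure over a 2-hour daily session.
r_daily = math.exp(-lam * 2)
print(round(r_daily, 3))  # 0.92
```

That is, under the exponential model the user has roughly a 92% chance of completing a two-hour session without encountering a failure.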
127. Draw a schematic diagram to represent the iterative waterfall model of software development.
On your diagram represent the following: (a) The phase entry and exit criteria for each phase. (b)
The deliverables that need to be produced at the end of each phase
Phase 1: Planning
Entry Criteria - Project Goals are defined
- Requirements are identified.
Exit Criteria - Project plans are approved
- Initial Risk assessments are complete
Deliverables - Project plan
- Initial Risk assessments
Phase 2: Analysis
Entry Criteria - Approved Project Plan
Exit Criteria - Detailed Requirements document
- User Interface design
Deliverables - Detailed Requirements document
- User Interface design
Phase 3: Design
Entry Criteria - Approved analysis phase deliverables
Exit Criteria - System architecture design
- Detailed design specifications
Deliverables - System architecture design
- Detailed design specifications
Phase 4: Implementation
Entry Criteria - Approved Design phase deliverables
Exit Criteria - Working Software Prototype
- Unit test
Deliverables - Working Software Prototype
- Unit test
Phase 5: Testing
Entry Criteria - Completed Implementation phase
Exit Criteria - Integrated Software System
- Verified Requirements
- System Tests
Deliverables - Integrated Software System
- Verified Requirements
- System Tests
Phase 6: Deployment
Entry Criteria - Approved Testing Phase Deliverables
Exit Criteria - Deployed Software System
- User Training Completed
- User acceptance testing results
Deliverables - Deployed Software System
- User Training Completed
- User acceptance testing results
128. What are the objectives of the feasibility study phase of software development? Explain the
important activities that are carried out during the feasibility study phase of a software
development project. Who carries out these activities? Mention suitable phase entry and exit
criteria for this phase.
The feasibility study phase is a critical initial step in the software development life cycle,
serving as a foundation for informed project decisions. Its objective is to determine whether
developing the product is technically, economically, and operationally feasible before
significant resources are committed. The important activities carried out during this phase
include: forming a rough overall understanding of the problem, formulating possible solution
strategies, assessing the resources, costs, and development time required for each strategy,
and performing a cost-benefit analysis to select the best solution (or to conclude that none is
viable). These activities are normally carried out by the project manager together with
experienced members of the development team. A suitable entry criterion for this phase is
that the customer's project proposal or concept document has been received; a suitable exit
criterion is that the feasibility report, with a recommended solution strategy, has been
completed and approved by management.
129. In a real-life software development project, would the different phases of the iterative
waterfall model overlap? Justify your answer.
In a real-life software development project using iterative waterfall SDLC, it is a practical necessity
that the different phases overlap to some extent. This is because the strict sequential nature of the
traditional waterfall model can be too rigid and inflexible for real-world projects. In practice, there is
often feedback from later phases that may require changes to earlier phases. For example, during
testing, it may be discovered that there are bugs in the code that require changes to the design or
requirements.
By allowing some overlap between phases, the project team can be more flexible and adaptable to
changing requirements. This can help to reduce the risk of problems later in the project and improve
the overall quality of the software.
Here is a typical effort distribution over the different phases in an iterative waterfall SDLC project:
• Planning: 10-20%
• Analysis: 20-30%
• Design: 20-30%
• Implementation: 20-30%
• Testing: 10-20%
• Deployment: 5-10%
As you can see, the effort is spread fairly evenly across the phases, with deployment taking the least.
However, there may be some variation depending on the specific project. For example, if the project is
very complex, then the analysis and design phases may require more effort.
Here are some specific examples of how phase overlap can be beneficial in an iterative waterfall
SDLC project:
• Requirements may evolve over time. As users get a better understanding of the system, they
may have new requirements or may need to change existing requirements. This feedback can
be incorporated into the design and implementation phases.
• Design decisions may need to be revisited. As the system is implemented, it may become
clear that some of the design decisions were not optimal. This feedback can be used to
improve the design of the system.
• Bugs may be discovered in testing. Bugs can be fixed during the implementation phase, or
they may require changes to the design or requirements. This feedback can help to prevent
problems from being released to production.
Overall, phase overlap is a valuable tool for managing risk and improving the quality of software in
an iterative waterfall SDLC project. By allowing some flexibility in the process, project teams can
better respond to changing requirements and deliver a higher-quality product.
130. Suppose a software has five different configuration variables that are set independently. If
three of them are binary (have two possible values), and the rest have three values, how many test
cases will be needed if pairwise testing method is used?
Pairwise (all-pairs) testing requires that every pair of values of every two variables be covered by
at least one test case, not that every full combination of all variables be executed. The number of
test cases can be worked out as follows:
1. Exhaustive testing would require 2 × 2 × 2 × 3 × 3 = 72 test cases, since three variables are
binary and the remaining two have three possible values each.
2. For pairwise coverage, a lower bound is the product of the sizes of the two largest domains.
Here the two largest domains belong to the two three-valued variables, so at least 3 × 3 = 9
test cases are needed just to cover every pair of their values.
3. Nine test cases are also sufficient: with each combination of the two three-valued variables
appearing exactly once, each three-valued variable takes each of its values in three different
tests, leaving enough room to assign the binary variables so that every binary–binary and
binary–ternary pair of values is also covered.
Therefore, using pairwise testing, only 9 test cases are needed, compared with 72 for exhaustive
testing of all combinations.
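For pairwise coverage, far fewer tests than the full 72 combinations are needed. The sketch below (a minimal check, with a hand-built suite of illustrative test tuples) verifies that nine tests already cover every pair of values between every two of the five variables:

```python
from itertools import combinations, product

# Domains: three binary variables (A, B, C) and two three-valued ones (D, E).
domains = [(0, 1), (0, 1), (0, 1), (0, 1, 2), (0, 1, 2)]

# A 9-test pairwise suite; each tuple assigns values to (A, B, C, D, E).
tests = [
    (0, 0, 0, 0, 0), (1, 1, 1, 0, 1), (0, 1, 1, 0, 2),
    (1, 1, 0, 1, 0), (0, 0, 1, 1, 1), (1, 0, 0, 1, 2),
    (1, 0, 1, 2, 0), (0, 1, 0, 2, 1), (0, 1, 0, 2, 2),
]

def covers_all_pairs(tests, domains):
    """Check that every value pair of every variable pair appears in some test."""
    for i, j in combinations(range(len(domains)), 2):
        required = set(product(domains[i], domains[j]))
        covered = {(t[i], t[j]) for t in tests}
        if not required <= covered:
            return False
    return True

print(len(tests))                        # 9 test cases
print(covers_all_pairs(tests, domains))  # True
```

The two three-valued variables occupy all nine of their combinations, and the binary columns are arranged so every remaining pair is hit at least once.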
131. Suppose you are the project manager of a large development project. The top
management informs you that you will have to make do with a fixed team size (i.e., a constant
number of developers) throughout the duration of your project, as compared to a project in
which you can vary the number of personnel as required. What will be the impact of this
decision on your project?
Having a fixed team size throughout the duration of a large development project can have several
significant impacts on the project's progress, resource allocation, and overall success. While it may
seem like a straightforward constraint, it can introduce challenges and necessitate careful planning to
ensure project completion.
Potential Impacts of a Fixed Team Size:
1. Reduced Flexibility: With a fixed team size, the project manager's ability to adapt to changing
project requirements, workload fluctuations, or unforeseen circumstances is limited. This lack
of flexibility can lead to delays, increased effort, or the need to compromise on project scope
or quality.
2. Resource Allocation Challenges: Assigning tasks and managing workloads becomes more
complex with a fixed team size. The project manager may need to make difficult decisions
about prioritizing tasks, balancing workloads, or even reskilling team members to
accommodate changing requirements.
3. Potential for Overburden or Underutilization: A fixed team size may lead to situations where
team members are either overburdened with work or underutilized due to the mismatch
between their skills and task requirements. This can affect team morale, productivity, and
overall project efficiency.
4. Limited Ability to Respond to Risks: With a fixed team size, the project manager's ability to
respond to identified risks or unexpected challenges is limited. This can increase the
likelihood of project delays, cost overruns, or even project failure.
5. Potential for Impacted Project Quality: The inability to adjust team size based on project
demands can put strain on team members, leading to potential reductions in quality control,
testing, or overall project polish.
Strategies to Mitigate the Impacts of a Fixed Team Size:
1. Thorough Project Planning: Detailed upfront planning is crucial to identify resource
requirements, task dependencies, and potential risks. This planning can help in allocating
resources efficiently and anticipating potential challenges.
2. Effective Task Management: Implement clear task management processes to prioritize tasks,
track progress, and identify areas where workload adjustments may be needed. Utilize tools
like project management software to visualize task dependencies and resource utilization.
3. Cross-Skilling and Collaboration: Encourage cross-skilling within the team to increase
flexibility and adaptability. Foster a collaborative environment where team members can
share knowledge, support each other, and adapt to changing priorities.
4. Regular Risk Assessment: Conduct regular risk assessments to identify potential challenges
and develop mitigation plans. Proactive risk management can help prevent issues from
escalating and impacting project progress.
5. Clear Communication and Stakeholder Management: Maintain open communication with
stakeholders to keep them informed of any potential impacts of the fixed team size constraint.
Manage expectations and ensure stakeholders understand the implications of this decision on
the project's timeline and scope.
132. Suppose you make a detailed schedule for your project whose effort and schedule estimates
for various milestones have been done in the project plan. How will you check if the detailed
schedule is consistent with the plan? What will you do if it is not?
Here are the steps to check whether the detailed schedule is consistent with the plan:
1. Review the detailed schedule. Carefully review the detailed schedule to ensure that all tasks are
clearly defined, have realistic time estimates, and are assigned to the appropriate resources.
2. Compare the detailed schedule to the plan. Compare the detailed schedule to the plan to ensure that
the overall scope of the project is still being addressed and that the milestones are still on track to be
met.
3. Check for resource conflicts. Identify any potential resource conflicts, such as where two tasks
require the same resource at the same time. These conflicts will need to be resolved before the schedule
can be considered consistent with the plan.
4. Calculate the total effort for the detailed schedule. Calculate the total effort required to complete the
detailed schedule and compare it to the effort estimates in the plan. If the total effort is significantly
higher than the original estimates, then the detailed schedule may not be consistent with the plan.
5. Review the detailed schedule with stakeholders. Get feedback from stakeholders on the detailed
schedule to ensure that it meets their expectations and that it is aligned with the overall project goals.
If the detailed schedule is not consistent with the plan, then the following steps should be taken:
1. Identify the areas of inconsistency. Identify the specific areas where the detailed schedule deviates
from the plan.
2. Analyse the causes of the inconsistency. Analyse the causes of the inconsistency to determine
whether they are due to changes in the project scope, underestimation of effort, or unrealistic time
estimates.
3. Adjust the detailed schedule. Adjust the detailed schedule to address the areas of inconsistency. This
may involve revising task estimates, reassigning tasks, or adding new tasks to the schedule.
4. Update the plan. Update the plan to reflect the changes made to the detailed schedule.
5. Recommunicate the plan to stakeholders. Recommunicate the updated plan to stakeholders to ensure
that everyone is on the same page.
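Step 4 above (comparing total scheduled effort against the plan's estimate) can be sketched as a simple check. The task names, effort figures, and 10% tolerance here are all hypothetical:

```python
# Compare the total effort in the detailed schedule against the plan's
# estimate, flagging deviations beyond a tolerance (illustrative figures).
tasks = {"design": 30, "coding": 55, "testing": 40}   # person-days per task
plan_estimate = 110                                    # person-days in the plan

def check_consistency(tasks, plan_estimate, tolerance=0.10):
    """Return (total scheduled effort, whether it is within tolerance of the plan)."""
    total = sum(tasks.values())
    deviation = (total - plan_estimate) / plan_estimate
    return total, abs(deviation) <= tolerance

total, consistent = check_consistency(tasks, plan_estimate)
print(total, consistent)  # 125 False -> schedule exceeds the plan by ~14%
```

A failed check like this would trigger the corrective steps listed above: identify and analyse the inconsistency, adjust the schedule, and update and recommunicate the plan.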
133. Consider a program containing many modules. If a global variable x must be used to share
data between two modules A and B, how would you design the interfaces of these modules to
minimize coupling?
To minimize coupling between modules A and B when sharing a global variable x, consider the
following strategies:
1. Encapsulation: Encapsulate the global variable x within a separate module, such as module
C, and provide accessor methods to retrieve and modify the value of x. This confines the
direct access to x within module C, reducing the visibility of x to other modules.
2. Parameter Passing: Instead of directly accessing the global variable x, pass it as a parameter
to the methods of modules A and B that require it. This decouples the modules from the
global variable, making it easier to test and maintain the code.
3. Data Abstraction: Introduce an abstract data type (ADT) to represent the data shared
between modules A and B. The ADT should encapsulate the data and provide methods to
access and manipulate it. This promotes information hiding and reduces the dependency of
modules A and B on the specific implementation of the shared data.
4. Mediators: Consider using a mediator pattern to manage the interaction between modules A
and B. The mediator acts as a central point of communication, handling requests for data
access and modification. This further decouples the modules and simplifies the interaction
between them.
5. Event-driven Architecture: Implement an event-driven architecture where modules A and B
communicate by publishing and subscribing to events related to the shared data. This
eliminates the need for direct dependencies between the modules and promotes a loosely
coupled design.
By employing these strategies, you can minimize coupling between modules A and B, making the
program more modular, maintainable, and testable.
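Strategy 1 (encapsulation with accessor methods) can be sketched as follows; all names here are illustrative, with a class standing in for the separate "module C":

```python
# The shared value x lives behind a narrow interface, so modules A and B
# never touch a global variable directly.

class SharedData:
    """Plays the role of module C: the sole owner of x."""
    def __init__(self, x=0):
        self._x = x          # x is private to this module

    def get_x(self):
        return self._x

    def set_x(self, value):
        self._x = value

# "Module A" and "module B" depend only on SharedData's accessors,
# which are also passed in as parameters (strategy 2) rather than
# reached through global scope.
def module_a_update(shared, delta):
    shared.set_x(shared.get_x() + delta)

def module_b_read(shared):
    return shared.get_x()

shared = SharedData()
module_a_update(shared, 5)
print(module_b_read(shared))  # 5
```

Because A and B see only `get_x`/`set_x`, the representation of x can change inside `SharedData` without touching either module, which is exactly the coupling reduction the strategies above aim for.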
134. Which of the four organizational paradigms for teams do you think would be most effective
(a) for the IT department at a major insurance company; (b) for a software engineering group at a
major defense contractor; (c) for a software group that builds computer games; (d) for a major
software company? Explain why you made the choices you did.
(a) IT department at a major insurance company: Closed paradigm The IT department at a
major insurance company would benefit most from the closed paradigm. This is because
insurance companies are highly regulated and require strict adherence to policies and procedures.
The closed paradigm provides a clear hierarchy of authority and ensures that all work is done in a
consistent and controlled manner.
(b) Software engineering group at a major defense contractor: Random paradigm A software
engineering group at a major defense contractor would be most effective under the random
paradigm. This is because defense contractors need to be able to innovate quickly and adapt to
changing requirements. The random paradigm allows for more flexibility and creativity, which
are essential for this type of work.
(c) Software group that builds computer games: Open paradigm A software group that builds
computer games would be best suited to the open paradigm. This is because the game industry is
highly competitive and requires teams to be able to come up with new and innovative ideas. The
open paradigm encourages collaboration and communication, which are essential for developing
successful games.
(d) Major software company: Synchronous paradigm A major software company would be most
effective under the synchronous paradigm. This is because software companies need to be able to
coordinate the work of large teams of people. The synchronous paradigm provides a framework
for communication and collaboration, which are essential for managing complex projects.
Additional justification for (a): the IT department handles all the work related to storing and
processing policies in the insurance company. Flows such as online premium payment and policy
advice run through software developed by the IT department, and this software plays a major role in
automating the company's work and building its brand value. Because this work is routine, regulated,
and mission-critical, the controlled structure of the closed paradigm is the best fit.
135. Is it possible to begin coding immediately after a requirements model has been created?
Explain your answer and then argue the counterpoint.
Yes, it is possible to begin coding immediately after a requirements model has been created. The
requirements model provides a blueprint or a roadmap for the coding process. It outlines the
functionalities, inputs, and outputs of the system, which helps in understanding the requirements and
the structure of the code.
Here are some of the benefits of starting coding after the analysis model is created:
• It allows the development team to have a clear understanding of what needs to be implemented and
how it should work. This reduces the chances of misinterpretation and ensures that the code aligns with
the intended system design.
• Coding right after the analysis model can expedite the development process. Since the analysis model
has already identified the system's requirements, the team can focus on writing the code without
wasting time on unnecessary iterations or revisions.
However, there are also some counterpoints to consider. One is that the requirements model may not
be complete or accurate; if this is the case, then coding too early may lead to problems down the
road. More fundamentally, moving straight from requirements to code skips the design activity:
without an architectural and component-level design, the resulting code is likely to be poorly
structured and hard to maintain. Finally, the analysis model may not be the best way to represent the
system's requirements. In some cases, it may be better to use a different modelling technique, such as
a use case diagram or a state machine diagram.
136. Besides counting errors and defects, are there other countable characteristics of software that
imply quality? What are they and can they be measured directly?
Yes, there are many other countable characteristics of software that imply quality besides counting errors and
defects. Following McCall's quality factors, these characteristics can be broadly grouped into qualities
concerned with product operation and qualities concerned with product revision and transition.
Product operation qualities relate to how well the software performs when in use. Some examples include:
• Correctness: Software should produce correct results for all valid inputs.
• Reliability: Software should perform consistently and without failures over time.
• Usability: Software should be easy to learn, use, and understand.
• Performance: Software should execute efficiently and use resources optimally.
• Security: Software should protect sensitive data and systems from unauthorized access.
Product revision and transition qualities relate to how easily the software can be changed, verified, and moved
to new environments. Some examples include:
• Maintainability: Software should be easy to modify, extend, and debug.
• Testability: Software should be designed to facilitate testing and defect detection.
• Portability: Software should be easy to adapt to different hardware and software environments.
• Reusability: Software components should be modular and reusable in different applications.
• Interoperability: Software should be able to interact and exchange data with other systems.
Some of these qualities can be measured directly (for example, response time for performance), while most are
assessed indirectly through measurable proxies, using techniques such as:
• Static analysis: This involves examining the source code of software to identify potential defects or
violations of coding standards.
• Dynamic testing: This involves executing software and observing its behavior to identify defects or
performance bottlenecks.
• User testing: This involves observing users interact with software to identify usability issues.
• Performance testing: This involves measuring the response time, resource consumption, and scalability
of software under load.
• Security testing: This involves attempting to exploit vulnerabilities in software to identify and
remediate security flaws.
By measuring these characteristics, software engineers can gain valuable insights into the overall quality of their
software and identify areas for improvement.
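As a small illustration of static analysis producing a countable quality proxy, the sketch below measures average function length in a source file as a crude maintainability indicator. Real tools (linters, complexity analysers) are far more sophisticated; the sample source and the metric itself are illustrative only:

```python
import ast

SOURCE = """
def short(a):
    return a + 1

def longer(a, b):
    c = a * b
    c += a
    return c
"""

def avg_function_length(source):
    """Average number of source lines per top-level function definition."""
    tree = ast.parse(source)
    lengths = [node.end_lineno - node.lineno + 1
               for node in ast.walk(tree)
               if isinstance(node, ast.FunctionDef)]
    return sum(lengths) / len(lengths)

print(avg_function_length(SOURCE))  # 3.0
```

Tracking such a number over time does not measure maintainability itself, but a steadily growing average function length is a countable warning sign that the code is becoming harder to maintain.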
137. You have been appointed a project manager for a major software products company. Your job
is to manage the development of the next-generation version of its widely used word-processing
software. Because competition is intense, tight deadlines have been established and announced.
What team structure would you choose and why? What software process model(s) would you
choose and why?
Team Structure
Given the tight deadlines and the need for high-quality work, I would choose a cross-functional team structure
for the development of the next-generation version of the word-processing software. A cross-functional team is
a team that is made up of members with different skills and expertise. This type of team is well-suited for
complex projects because it allows for a more efficient division of labor and a more effective flow of
information.
In this case, the cross-functional team would include members with expertise in the following areas:
• Software development
• User interface (UI) design
• Quality assurance (QA)
• Documentation
This team would be responsible for all aspects of the project, from requirements gathering to deployment.
Software Process Model
Given the tight deadlines, I would choose a hybrid software process model that combines elements of the Agile
and Waterfall models. The Agile model is a flexible iterative model that emphasizes user feedback and rapid
prototyping. The Waterfall model is a more structured sequential model that emphasizes planning and
documentation.
A hybrid model would allow the team to take advantage of the benefits of both Agile and Waterfall. The Agile
approach would allow the team to respond quickly to changes in requirements and to iterate on the design of the
software. The Waterfall approach would provide the team with a clear roadmap for the project and would help
to ensure that the project is completed on time and within budget.
Specifically, I would use the following hybrid model:
1. Planning phase: Use Agile methodologies to gather requirements, define user stories, and create a high-
level project plan.
2. Design phase: Use Waterfall methodologies to create detailed design documents, including user
interface (UI) mock-ups, software architecture diagrams, and database schemas.
3. Development phase: Use Agile methodologies to develop and test the software in an iterative fashion.
4. Testing phase: Use Waterfall methodologies to perform comprehensive system testing and quality
assurance (QA) testing.
5. Deployment phase: Use Waterfall methodologies to deploy the software to production and provide user
support.
This hybrid model would provide the team with the flexibility and agility they need to meet the tight deadlines
while also ensuring that the project is completed on time, within budget, and with high quality.
138. If an architecture of the proposed system has been designed specifying the major components
in the system, and you have source code of similar components available in your organization’s
repository, which method will you use for the effort estimation? Explain your answer.
In this situation, you can use the analogy method for effort estimation. This method involves estimating the
effort for each component of the proposed system by analogy to similar components in your organization’s
repository.
To use the analogy method, you will need to do the following:
1. Identify similar components: For each component in the proposed system, identify one or more similar
components in your organization’s repository.
2. Assess similarity: Assess the similarity between each proposed component and its corresponding
similar components. This can be done by considering factors such as the size, complexity, and
functionality of the components.
3. Estimate effort: For each proposed component, estimate the effort based on the effort of its
corresponding similar components. This can be done by multiplying the effort of the similar component
by a similarity factor. The similarity factor should be a number between 0 and 1, where 1 indicates that
the components are identical and 0 indicates that the components are not similar at all.
The analogy method is a relatively simple and intuitive method for effort estimation. However, it is important to
note that the accuracy of the method will depend on the quality of the similarity assessments.
Here is an illustrative example of how to use the analogy method to estimate the effort for a proposed system
(figures in person-months): suppose the proposed system has three components whose closest repository
matches each required 10 person-months of effort, with assessed similarity factors of 0.8, 0.7, and 1.0. The
estimated efforts for the three components are then 8, 7, and 10 person-months respectively.
In this example, the total estimated effort for the proposed system is 25 person-months.
The analogy method is a useful tool for effort estimation when you have a good understanding of the proposed
system and you have access to a repository of similar components. However, it is important to use the method
with caution and to be aware of its limitations.
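The three steps of the analogy method can be sketched as a short calculation. The component names, repository efforts, and similarity factors below are illustrative, not drawn from any real repository:

```python
# Analogy-based effort estimation: each proposed component's effort is the
# effort of its closest repository match scaled by a similarity factor in [0, 1].
components = [
    # (name, effort of similar repository component in person-months, similarity factor)
    ("login module",   10, 0.8),
    ("report module",  10, 0.7),
    ("billing module", 10, 1.0),
]

def estimate_effort(components):
    """Sum of (similar-component effort x similarity factor) over all components."""
    return sum(effort * factor for _, effort, factor in components)

print(estimate_effort(components))  # 25.0 person-months
```

The estimate is only as good as the similarity factors, which is why the method works best when the repository components are genuinely close matches and their recorded efforts are trustworthy.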
139. Describe a process framework in your own words. When we say that framework activities are
applicable to all projects, does this mean that the same work tasks are applied for all projects,
regardless of size and complexity? Explain
A process framework is a structured and standardized approach to managing and executing projects. It
provides a common set of guidelines, principles, and templates that can be applied to projects of all
sizes and complexities. The goal of a process framework is to improve project outcomes by promoting
consistency, efficiency, and effectiveness.
While a process framework provides a common foundation for project management, it does not mean
that the same work tasks are applied to all projects regardless of size and complexity. The specific
activities and deliverables will vary depending on the project's unique characteristics, such as its
scope, objectives, risks, and stakeholders.
Here is an analogy to illustrate this concept: Imagine building a house. While there is a general
framework for constructing a house (e.g., laying the foundation, framing the walls, roofing), the
specific materials, techniques, and time required will vary depending on the size, style, and
complexity of the house. Similarly, a process framework provides a general roadmap for project
management, but the specific tasks and deliverables will vary depending on the project's unique
characteristics.
The key takeaway is that a process framework is a flexible tool that can be adapted to fit the specific
needs of each project. It provides a common foundation for project management but does not dictate a
rigid set of rules that must be followed for all projects.
140. The concept of “antibugging” is an extremely effective way to provide built-in debugging
assistance when an error is uncovered:
a. Develop a set of guidelines for antibugging
b. Discuss the advantages of using the technique
c. Discuss disadvantages
A. Developing a Set of Guidelines for Antibugging
Antibugging, also known as defect prevention, is a crucial aspect of software development that aims to
minimize the introduction of defects in the first place. By proactively addressing potential issues during the
design and coding phases, antibugging techniques can significantly reduce the time and effort required for
debugging later in the development process.
Here is a comprehensive set of guidelines for implementing antibugging practices:
1. Thorough Requirements Analysis:
o Clearly define and understand the functional and non-functional requirements of the software.
o Identify potential ambiguities, inconsistencies, or missing requirements early on.
o Validate requirements with stakeholders through prototyping, user stories, or mock-ups.
2. Design for Simplicity:
o Favor simpler algorithms and data structures over complex ones.
o Minimize the number of conditional statements and nested loops.
o Modularize the code to enhance maintainability and reduce complexity.
3. Defensive Coding:
o Validate user inputs to prevent invalid data from entering the system.
o Handle error conditions gracefully and prevent crashes or unexpected behaviour.
o Use assertions to check for invariants and assumptions throughout the code.
4. Code Reviews and Static Analysis:
o Implement regular code reviews to identify potential defects and improve code quality.
o Utilize static analysis tools to detect potential bugs, coding violations, and memory leaks.
o Encourage open communication and collaboration during code reviews.
5. Unit Testing:
o Develop comprehensive unit tests for each module or function in the code.
o Test for expected and unexpected inputs, including edge cases and boundary conditions.
o Automate unit tests to ensure consistent and reliable testing.
6. Continuous Integration and Delivery:
o Implement a continuous integration (CI) pipeline to automate code builds and testing.
o Integrate unit tests into the CI pipeline to catch defects early and prevent them from merging
into the main codebase.
o Employ continuous delivery (CD) to automate the deployment of bug-free code to production
environments.
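Guideline 3 (defensive coding) can be sketched in a few lines: validate inputs at the boundary and use assertions for internal invariants. The function and its validation rules are illustrative:

```python
def average(values):
    """Mean of a non-empty list of numbers, with defensive checks."""
    # Validate user input before using it (antibugging: reject bad data early).
    if not isinstance(values, list) or not values:
        raise ValueError("values must be a non-empty list")
    if not all(isinstance(v, (int, float)) for v in values):
        raise ValueError("all values must be numbers")
    result = sum(values) / len(values)
    # Assert an invariant we believe must hold: the mean lies within the range
    # of the inputs. If this ever fires, it pinpoints a logic error immediately.
    assert min(values) <= result <= max(values)
    return result

print(average([2, 4, 6]))  # 4.0
```

The explicit `ValueError`s handle foreseeable bad inputs gracefully, while the assertion documents and checks an assumption that should never fail, turning a latent defect into an immediate, localised failure during testing.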
B. Advantages of Using Antibugging
Antibugging offers several compelling advantages over traditional debugging approaches:
1. Reduced Debugging Effort: Antibugging techniques minimize the number of defects introduced,
leading to a significant reduction in debugging time and effort.
2. Improved Software Quality: By preventing defects from entering the codebase, antibugging enhances
overall software quality and reliability.
3. Reduced Development Costs: By minimizing the time spent on debugging, antibugging practices lower
development costs and improve project efficiency.
4. Enhanced Maintainability: Antibugged code is generally easier to understand, maintain, and modify
due to its simpler structure and reduced error-proneness.
5. Improved User Experience: Antibugged software provides a smoother and more reliable user
experience, minimizing frustration and enhancing user satisfaction.
C. Disadvantages of Antibugging
While antibugging offers significant benefits, it also has some potential drawbacks:
1. Upfront Investment: Implementing antibugging practices may require an initial investment in training,
tools, and processes.
2. Increased Development Time: Thorough requirements analysis, code reviews, and unit testing can
potentially extend the initial development phase.
3. Potential for Over-Engineering: Overzealous focus on antibugging could lead to overly complex or
unnecessary design decisions.
4. Risk of Overlooking Defects: No antibugging technique is foolproof, and some defects may still slip
through.
5. Limited Applicability: Antibugging may not be equally applicable to all types of software projects,
especially those with tight deadlines or limited resources.
141. Give at least three examples in which black-box testing might give the impression that “everything’s OK,”
while white-box tests might uncover an error. Give at least three examples in which white-box testing might
give the impression that “everything’s OK,” while black-box tests might uncover an error
Here are three examples in which black-box testing might give the impression that "everything's OK,"
while white-box testing might uncover an error:
1. Input Validation: Black-box testing may not thoroughly test all possible input combinations, especially
those that lie outside the expected range. For instance, a function that takes a numeric input might not
handle invalid inputs like strings or symbols, leading to unexpected behaviour or crashes. White-box
testing, on the other hand, can explicitly test for these edge cases and ensure that the function behaves
correctly under all input conditions.
2. Error Handling: Black-box testing may not uncover subtle error conditions that only arise under
specific circumstances. For example, a program might handle certain error scenarios gracefully but fail
to handle others, leading to silent failures or data corruption. White-box testing can thoroughly
examine the error handling mechanisms and ensure that all potential errors are properly handled and
reported.
3. Performance Bottlenecks: Black-box testing may not identify performance issues that only arise under
heavy load or specific usage patterns. For instance, a program might perform adequately under normal
usage but experience significant slowdowns or memory leaks under high concurrency or specific input
sequences. White-box testing can analyse the code's efficiency and identify potential bottlenecks that
could impact performance under real-world conditions.
Conversely, here are three examples in which white-box testing might give the impression that
"everything's OK," while black-box tests might uncover an error:
1. Assumption Violations: White-box testing may overlook implicit assumptions made in the code that are not
explicitly documented or tested. For example, a function might assume that a certain input parameter is always
non-null, but if this assumption is violated, the function could lead to unexpected behaviour. Black-box testing,
by providing a wider range of input scenarios, can help identify these assumption violations and ensure that the
code is robust to unexpected conditions.
2. User Interface Issues: White-box testing may focus on the internal functionality of the code and
overlook usability issues that only become apparent during user interaction. For instance, a program
might function correctly according to the code but have an unintuitive user interface, leading to user
frustration and errors. Black-box testing, by involving actual users, can identify these usability issues
and ensure that the program is user-friendly and easy to navigate.
3. Integration Errors: White-box testing may not uncover integration issues that arise when the program
interacts with other systems or external components. For example, a program might function correctly
in isolation but fail to communicate properly with a database or external API. Black-box testing can
simulate these interactions and identify integration errors that could affect the overall functionality of
the system.
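The first contrast above (black-box tests passing while code inspection reveals a mishandled branch) can be shown with a tiny contrived function; the grading rules here are purely illustrative:

```python
def classify(score):
    """Classify an exam score (contrived example with a hidden defect)."""
    if score >= 60:
        return "pass"
    elif score >= 0:
        return "fail"
    else:
        # Defect hidden from typical black-box inputs: negative scores fall
        # through to this branch and are silently treated as passing.
        return "pass"

# Black-box tests with expected inputs: everything looks fine.
print(classify(75), classify(30))  # pass fail

# White-box testing, aiming to cover every branch, forces the negative
# branch and exposes the defect.
print(classify(-5))  # pass (should arguably be rejected as invalid input)
```

Branch-coverage criteria would flag that the final `else` was never exercised by the black-box suite, which is precisely how white-box testing uncovers errors that behavioural tests with typical inputs miss.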
142. What do you understand by sliding window planning? Explain using a few examples the
types of projects for which this form of planning is especially suitable. Is sliding window planning
appropriate for small projects? What are its advantages over conventional planning?
Sliding window planning is an iterative project planning approach that divides a large project into a
series of smaller, more manageable phases. Each phase is planned in detail, and the plan for the next
phase is developed as the current phase is being executed. This allows for ongoing adaptation to
changing requirements and unforeseen challenges.
Sliding window planning is particularly well-suited for projects that:
• Are large and complex, making it difficult to plan the entire project upfront.
• Have uncertain or evolving requirements, which may necessitate changes to the project plan
as work progresses.
• Have long development timelines, making it impractical to plan the entire project in detail
from the start.
Examples of projects where sliding window planning can be effective include:
• Software development: In software development, sliding window planning allows developers
to focus on completing specific modules or features before moving on to the next, enabling
flexibility in adapting to changing requirements and evolving technologies.
• Construction: In construction projects, sliding window planning helps break down the project
into manageable phases, such as foundation, framing, roofing, and interior finishes. This
phased approach allows for adjustments to the plan as construction progresses and unforeseen
conditions arise.
• New product development: In new product development, sliding window planning enables
iterative cycles of design, prototyping, testing, and refinement, allowing the product to be
continuously improved based on feedback and market insights.
Sliding window planning can be beneficial for small projects as well, especially those with some
degree of uncertainty or complexity. By breaking down the project into smaller phases, even small
projects can benefit from the iterative approach and the ability to adapt to changes as they occur.
Advantages of sliding window planning over conventional planning include:
1. Flexibility: Sliding window planning allows for ongoing adaptation to changing requirements
and unforeseen challenges, making it more suitable for projects with evolving needs.
2. Reduced Risk: By focusing on smaller, more manageable phases, sliding window planning
reduces the risk of major setbacks or costly mistakes.
3. Improved Communication: The iterative nature of sliding window planning fosters better
communication and collaboration among stakeholders, ensuring that everyone is aligned with
the project's progress and goals.
4. Early Feedback: Sliding window planning provides opportunities for early feedback and
validation, allowing for course correction and improvements throughout the project lifecycle.
5. Reduced Overcommitment: Sliding window planning prevents over-commitment to long-term
plans that may not be feasible or adaptable to changing circumstances.