Project context is central to successful performance testing.
Identify business, project, system, & user success criteria.
Identify system usage and key metrics; plan & design tests.
Install & prepare environments, tools, & resource monitors.
Script the performance tests as designed.
Run and monitor tests. Validate tests, test data, and results.
Analyze the data individually and as a cross-functional team.
Consolidate and share results, customized by audience.
• Introduction
• Performance Testing, Process, and Tools
• Performance Evaluation and Models
• Performance Evaluation and Metrics
Performance Testing
Performance testing refers to test activities that check and measure system performance.
The major objectives of performance testing:
To confirm and validate the specified system
performance requirements.
To check the current product capacity in order to answer questions from customers and marketing people.
To identify performance issues and performance degradation in a given system.
Performance Testing - Focuses
System process speed (Max./Min./Average)
o system processes, tasks, transactions, responses.
o data retrieval, data loading
System throughput (Max./Min./Average)
o loads, messages, tasks, processes
System latency (Max./Min./Average)
o Message/event, task/process
System utilization (Max./Min./Average)
o Network, server/client machines.
System availability (component-level/system-level)
o component/system, services/functions
o system network, computer hardware/software
Performance Testing - Focuses
System reliability (component-level/system-level)
o component/system, services/functions
o system network, computer hardware/software
System scalability (component-level/system-level)
o load/speed/throughput boundary
o improvements on process speed, throughput
System success/failure rates for
o communications, transactions, connections
o call processing, recovery, …
Domain-specific/application-specific
o agent performance
o real-time report generation speed
o workflow performance
Performance Testing - Test Process
Understand system and identify performance requirements
Identify performance test objectives and focuses
Define performance test strategy:
o Define/select performance evaluation models
o Define/select performance test criteria
o Define and identify performance test metrics.
Identify the needs of performance test tools and define
performance test environment
Write performance test plan
Develop performance test tools and support environment
Set up the target system and performance test beds
Design performance test cases and test suite
Execute the performance tests and collect data
Analyze the performance data and report the results
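The steps above can be captured in a lightweight, machine-readable test plan before scripting begins. The sketch below is a minimal illustration in Python; the field names, threshold values, and test-case names are purely hypothetical and not tied to any specific tool.

```python
# Illustrative performance test plan structure (all names and values are examples only).
performance_test_plan = {
    "objectives": ["validate specified response-time requirements", "find capacity limit"],
    "focuses": ["process speed", "throughput", "latency", "utilization"],
    "metrics": {
        "response_time_s": ["avg", "p95", "max"],
        "throughput_tps": ["avg", "peak"],
        "cpu_utilization_pct": ["avg", "max"],
    },
    "criteria": {"p95_response_time_s": 2.0, "error_rate_pct": 1.0},   # pass/fail thresholds
    "environment": {"test_bed": "staging", "monitors": ["cpu", "memory", "network"]},
    "test_cases": ["login", "search", "checkout"],                     # hypothetical scenarios
}
```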
Performance Evaluation
What is performance evaluation?
Using a well-defined approach to study, analyze, and measure the
performance of a given system.
The basic tasks and scope:
Collect system performance data
Define system performance metrics
Model system performance
Measure, analyze, estimate system performance
Present and report system performance
Performance Evaluation
– Objectives and Needs
The major objectives:
Understand product capacity
Discover system performance issues
Measure and evaluate system performance
Estimate and predict system performance
The basic needs are:
Well-defined performance metrics
Well-defined performance evaluation models
Performance evaluation tools and supporting environment
Performance Evaluation - Approaches
Performance testing: (during production)
o measure and analyze the system performance based on
performance test data and results
Performance simulation: (pre-production)
o study and estimate system performance using a simulation
approach
Performance measurement at the customer site:
(post-production)
o measure and evaluate system performance during system
operations
Load Testing
• Load testing is a simulation of multiple virtual
users working with a web application at the
same time.
• Load testing can be performed for a number of
reasons, but the main goal is always to check the
performance of the application being tested.
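As a minimal illustration of the idea, the Python sketch below simulates a few virtual users with standard-library threads. The target URL, user count, and request count are placeholder values; a real load test would normally use a dedicated load-generation tool rather than hand-rolled threads.

```python
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # hypothetical application under test
VIRTUAL_USERS = 10                      # number of concurrent simulated users
REQUESTS_PER_USER = 20

results = []                            # (elapsed_seconds, ok) per request
results_lock = threading.Lock()

def virtual_user():
    """One simulated user issuing requests in a loop."""
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        ok = True
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
        except Exception:
            ok = False
        with results_lock:
            results.append((time.perf_counter() - start, ok))

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(results)} requests completed in {time.perf_counter() - start:.1f}s")
```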
Types of load testing
• Performance Testing
• Capacity Testing
• Stress Testing
• Volume Testing
• Endurance Testing
• Regression Testing
Performance Testing
• Load is gradually increased during the test by adding
more and more concurrent virtual users.
• The following parameters should be monitored and
compared throughout the different test phases:
1. Web application response time
2. Number of HTTP requests or application specific transactions
processed per second
3. Percentage of failed requests
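Assuming response-time samples of the form collected by the virtual-user sketch above (a list of (elapsed_seconds, ok) tuples), the helper below shows one way these three parameters could be computed. The function name, the 95th-percentile choice, and the result keys are illustrative.

```python
import statistics

def summarize(samples, wall_clock_seconds):
    """Summarize load-test samples: samples is a list of (elapsed_seconds, ok) tuples."""
    times = sorted(t for t, _ in samples)
    failed = sum(1 for _, ok in samples if not ok)
    return {
        "avg_response_time_s": statistics.mean(times),            # 1. response time
        "p95_response_time_s": times[int(0.95 * (len(times) - 1))],
        "requests_per_second": len(samples) / wall_clock_seconds,  # 2. throughput
        "failed_pct": 100.0 * failed / len(samples),               # 3. failed requests
    }
```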
Capacity Testing
• Capacity tests are executed to find out how many
concurrent users the application can handle without
degradation of quality.
• In this case, virtual users are added gradually; the
quality criteria should be known in advance, and the
test only needs to check that they are observed.
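A capacity test can then be driven by a simple ramp-up loop. In the sketch below, run_load(users) is a hypothetical callback that runs a short load test at the given user level and returns the metrics dictionary from the previous sketch; the step size and quality criteria are example values only.

```python
def find_capacity(run_load, max_users=500, step=25,
                  p95_limit_s=2.0, error_limit_pct=1.0):
    """Add virtual users step by step; return the last level that met the criteria."""
    last_good = 0
    for users in range(step, max_users + 1, step):
        m = run_load(users)                 # hypothetical: short test at this user level
        if m["p95_response_time_s"] > p95_limit_s or m["failed_pct"] > error_limit_pct:
            break                           # quality criteria violated at this level
        last_good = users
    return last_good
```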
Stress Testing
• Every system has a capacity limit. When the load
goes beyond it, the application starts responding
very slowly and produces errors.
• The goals of stress testing are:
1. Find the capacity limit
2. Check that when the limit is reached, the application handles the
stress correctly, produces graceful overload notifications, and does
not crash.
3. When the load is reduced back to the regular level, the application
should be able to return to normal operation, retaining all of its
performance characteristics.
Volume testing
• Volume tests are targeted at loading the
application with a significant amount of data
and maximizing the complexity of each
transaction.
• For example: if the application supports uploading
files, try using very large files. If it has a search
function, try complex keyword combinations
and queries that produce a very long list of
results.
Endurance Testing
• This type of testing is also called soak testing. It is used to check
that the system can withstand a load for a long time or a large number of
transactions.
• It usually reveals various types of resource allocation
problems. For example, a small memory leak will not be
evident in a quick test, even one with a high load.
In computer science, a memory leak occurs when a computer program incorrectly
manages memory allocations.[1] In object-oriented programming, a memory leak may
happen when an object is stored in memory but can no longer be accessed by the running
code. Eventually, too much of the available memory may become allocated and all or part of the
system or device stops working correctly, the application fails, or the system slows
down unacceptably due to thrashing.
• Endurance tests should combine a periodically changing load with a long test duration.
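For illustration only, the fragment below shows the kind of slow, unbounded growth (here, a cache that is written to but never evicted) that typically stays invisible in short tests and only shows up during a long endurance run. The function and variable names are hypothetical.

```python
# Illustrative leak pattern an endurance test tends to expose: a cache that only
# grows because nothing ever removes old entries.
_response_cache = {}

def handle_request(request_id, payload):
    result = payload.upper()                # stand-in for real processing
    _response_cache[request_id] = result    # unbounded growth: no eviction policy
    return result
```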
Regression Testing
• The idea is to make load testing a part of your
regular development process by creating
regression load tests and applying them to
every new version of your application.
Performance Test requirement analysis
• Gathering of requirements for performance falls into two categories: first,
studying and understanding the specifics of the workload, and second, gathering
performance targets. By workload we mean the load on the system in terms of the
number of transactions that need to be processed simultaneously or the amount of
processing that is required to be done. Performance targets are set for the basic
performance quantities of response time, throughput, and resource utilization.
• The study of the workload involves building a usage profile of the system, or
more loosely, studying the 'load' for which the system needs to 'work'. Inputs are
to be collected for six areas:
1. Business transactions for online processing
2. Batch processing
3. Reports processing
4. Business data volumes
5. User interaction
6. External interfaces for the business applications
Workload Inputs to be gathered
1. The list of business components of the system
2. For each business component, the transaction processing workload (including queries):
a. The list of online processing transactions
b. The list of steps for each online transaction (called elementary transactions)
c. The ‘mix’ of transactions in terms of percentages and/or dependencies
d. Segregation of transactions by criticality
e. Separation into synchronous and asynchronous transactions
f. Derived transactions that emanate out of workflow requirements
g. The average transaction rates, as fine-grained as possible. For example, transactions per second, transactions per minute, transactions per hour,
transactions per day, transactions per week, etc.
h. The peak transaction rates
i. The estimated growth in transactions
j. The working hours of the business, and the periods of maximum transaction load per type of transaction
k. The number of web or client/server interactions per transaction
l. The transaction complexity in terms of:
i. Number of screen interactions
ii. Communication protocol of screen interactions (for example, HTTP, Oracle NCA, TCP/IP)
iii. Number of fields, list boxes per screen
iv. Amount of data fetched per screen, number of records
v. Number of records processed by the transaction
vi. Relative complexity definition (for example, transaction A is simple since it has two screen interactions with 3 fields each, transaction B is moderate since
it has 5 screen interactions with 2 to 5 fields each, and transaction C is complex since it has 10 screen interactions with 10 fields each)
vii. Amount of data (in bytes) sent and returned per screen
synchronous and asynchronous processing
• Synchronous processing involves a user waiting for transaction completion, in order for the
business processing to move forward. For example, while submitting an insurance claim, the
end user waits for the IT system to respond in order to get a confirmation of acceptance of
the claim. These are usually transactions that come under the realm of the front office of a
business.
• Asynchronous processing on the other hand relates to back office processing, where a
response is not immediately required. For example, once the claim submission is confirmed
the end user can be given an acknowledgement number for further inquiry. The end user
does not have to wait in front of a computer terminal until the claim is fully processed.
Therefore, the business processing of the claim is asynchronous with respect to the end user.
Workflow
• In a workflow system transactions get generated within the system as a result of a business transaction
initiated by an end user. For example, a customer wishes to withdraw a large amount of money from a
bank. This withdrawal transaction results in a workflow for approval by the bank manager.
• Consider this example of a real life workload. One of the largest banks in the world processes close to
20 million banking transactions per day (including inquiries).
Transaction mix for a very large bank
Transaction Type             Percentage Occurrence
Cash Withdrawal (ATM)        6.5%
Cash Withdrawal              5.7%
Passbook Line – No Update    4.9%
Short Enquiry                4.8%
Update Passbook              4.7%
Cash Deposit                 3.3%
Credit Transaction           2.6%
Deposit Transfer             2.2%
Cheque Deposit               1.9%
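A mix like this can be turned directly into a workload model for the load scripts. The sketch below takes the percentages from the table above, lumps the unlisted remainder into a hypothetical "Other" bucket, and picks the next simulated transaction type by weighted random selection.

```python
import random

# Transaction mix from the table above; the remaining share is lumped as "Other".
TRANSACTION_MIX = {
    "Cash Withdrawal (ATM)": 6.5, "Cash Withdrawal": 5.7,
    "Passbook Line - No Update": 4.9, "Short Enquiry": 4.8,
    "Update Passbook": 4.7, "Cash Deposit": 3.3, "Credit Transaction": 2.6,
    "Deposit Transfer": 2.2, "Cheque Deposit": 1.9,
}
TRANSACTION_MIX["Other"] = 100.0 - sum(TRANSACTION_MIX.values())

def next_transaction():
    """Pick the next simulated transaction type according to the mix percentages."""
    types = list(TRANSACTION_MIX)
    return random.choices(types, weights=[TRANSACTION_MIX[t] for t in types])[0]
```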
• For the same bank, it was required to estimate the peak
transaction workload. The first step involves examining the
variations in the number of transactions per day. It was observed
that on the day with the maximum load across the year,
transactions are 30% higher than average. After this step, one
needs to examine the peak transactions per hour within the
day. Data showed that 40% of transactions occur in a peak
period of 3 hours. The next step is to look at the intra-hour
skew and narrow down to the peak requirement per minute
or per second. For example, in a very large stock exchange,
25% of the daily transactions occur in a window of 15
minutes.
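Putting the figures from this example together (20 million transactions per day, a peak day 30% above the average day, and 40% of that day's load falling in a 3-hour window), a rough peak-rate calculation looks like the sketch below; the intra-hour skew would then be applied on top of this average.

```python
daily_transactions = 20_000_000        # from the example above (includes inquiries)
peak_day_factor = 1.30                 # peak day is 30% above the average day
peak_window_share = 0.40               # 40% of the day's transactions in a 3-hour peak
peak_window_seconds = 3 * 3600

peak_day_txns = daily_transactions * peak_day_factor
peak_window_txns = peak_day_txns * peak_window_share
avg_tps_in_peak_window = peak_window_txns / peak_window_seconds
print(f"~{avg_tps_in_peak_window:.0f} transactions/second during the peak window")
# Roughly 963 TPS, before accounting for any intra-hour skew.
```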
Workload Inputs to be gathered for Batch
Processing
1. List of business components that involve batch processing
2. For each business component, the batch processing workload:
a. List of batch programs
b. Mix of batch programs in terms of schedule and completion windows
c. Frequency of batch programs in terms of daily, weekly, monthly, and so on.
d. Complexity of batch programs in terms of number of records fetched and
processed
e. The concurrency of batch processing with online processing (overlapping
periods of time)
f. Schedule and duration of backup
Workload Gathering for Data Volumes
• Business data growth impacts performance significantly, both through increased storage costs and the
adverse effects of data volumes on the performance of transactions, reports, and batch programs. At
the requirements gathering stage one may not have data sizes and volumes at the database level or the
file level.
1. Business data model across all business components
2. The volumes of business and data entities:
a. List of entities (for example, customers, orders, policies, claims)
b. Data volume (number of records) per entity as of today
c. Data access patterns, if any
d. Growth rate of entities over one to five year period, preferably quarter on quarter growth
e. Data retention period in number of years
• Data access patterns are important to note because quite often more than 80% of
data accesses come from less than 20% of the transactions or touch less than 20% of the data items. For
example, in a large brokerage 85% of the trades occur on less than 2% of the stocks. The last
requirement, on data retention period, is important from a compliance perspective. Some businesses
need to keep data for 7 years, some for 25 years.
Workload Inputs to be gathered for Users
• As the business grows, so do the users of IT systems. Collecting data about users and
their usage profiles not only helps one understand the business workloads better, but
also provides a useful crosscheck against transaction and report workload data.
1. List of business processing departments or groups
2. For each department or group:
a. Classification of users (for example, business user, administrator, system manager,
control personnel)
b. Number of registered users per type of user
c. Number of concurrent users per type of user, as a function of time of day
d. The types of transactions that can be executed by each type of user
e. Growth rate of users, on an annual basis
f. The interactions between user and end customer, if any (for example, questions asked in
telephone call)
Workload Gathering for External Interfaces
• With expansion in many businesses, the workload does not emanate from
business users alone. The business usually creates channels for the external
world to interact with it, be it customers, partners, or suppliers. For
example, in the banking world ATMs have become very common and all ATM
transactions come to a bank's IT systems via interface channels.
1. List of all business components that involve interfaces
2. For each business component:
a. List of interfaces that carry transactional workload
b. List of interfaces that carry batch workload
c. For transactional interfaces collect data
d. For batch interfaces collect data
Gathering Inputs on Performance Targets
• In addition to collecting inputs on workloads, the performance requirements gathering
phase also involves collecting inputs on performance targets. The performance metrics
for a system typically are:
• Response time per online interaction
• Completion time per online transaction
• Delivery time for asynchronous transaction
• Transaction throughput
• Batch completion time
• Report throughput and completion time
Resource Consumption Requirements
1. CPU utilization
2. Memory consumption
3. Disk consumption
4. Network bandwidth
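These four resource metrics can be sampled on servers and clients during a test run. The sketch below uses the third-party psutil package (assumed to be installed); the sampling window, interval, and the choice of the root filesystem for disk usage are illustrative.

```python
# Resource monitoring sketch using the third-party psutil package (pip install psutil).
import psutil

def sample_resources(duration_s=60, interval_s=5):
    samples = []
    net_before = psutil.net_io_counters()
    for _ in range(int(duration_s / interval_s)):
        cpu = psutil.cpu_percent(interval=interval_s)   # 1. % CPU over the interval
        mem = psutil.virtual_memory().percent           # 2. % memory in use
        disk = psutil.disk_usage("/").percent           # 3. % disk space used on /
        samples.append((cpu, mem, disk))
    net_after = psutil.net_io_counters()
    net_bytes = ((net_after.bytes_sent - net_before.bytes_sent)
                 + (net_after.bytes_recv - net_before.bytes_recv))
    return samples, net_bytes / duration_s              # 4. avg network bytes/second
```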
Methodology for Qualitative Performance
Requirements Analysis
• Perform a thorough workload analysis, as described above.
• Gather performance targets, as discussed above.
• Validate performance targets in light of the technology being used, such as dial-up lines, low-end PCs as desktops, etc.
• Identify transactions/interactions that result in a redundant display of information. Evaluate filtering alternatives for the same.
• Identify transactions, reports, and batch jobs that can cause excessive consumption of system resources. For these, evaluate
options for consumption reduction:
o Pay per use or charge back
o Avoiding fancy features
o Prioritizing user categories and providing better performance to those with higher priority
o Adding requirements that will limit resource consumption
• Identify transactions, reports, and batch jobs that require workload management, and arrive at a suitable workload management
policy such as:
o Flow control
o Scheduling
• Identify transactions, reports, and batch jobs that can potentially interfere with each other's response. Transactions that are
less frequent but can cause significant degradation to very frequent transactions are candidates that require a change in
business processing.
• Consider adding requirements for tracking performance of specific classes of transactions or for all types of transactions, so
that it is easy to resolve performance issues in production, in particular when there are multiple vendors.
Transaction Methodology
• Transaction frequency
• Transaction complexity in terms of resource
consumption estimates
• Transaction workload in terms of bursty or
streamlined
• Transaction interference
Performance Requirements Analysis Matrix
Frequency | Complexity | Workload | Interference | Recommendation
Low | Low | - | Yes | Evaluate options for change in business processing
Low | High | - | Yes | Evaluate consumption reduction options such as pay per use, charge back, limits to consumption, prioritization
High | Low | Low | Bursty | Evaluate workload management such as flow control, scheduling
High | High | High | Streamlining | Evaluate consumption reduction options
High | High | High | Bursty | Evaluate consumption reduction and workload management options
Contact Team for Requirement
• To execute a thorough requirements analysis
study for performance it is necessary to have a
team with the following composition:
• Business analysts
• Application architects
• Performance engineers/Technical Architects
• System administrators (for analysing systems
already in production)
Quantitative Performance Requirements Analysis
• Deriving performance targets
• Validating performance targets
• Analysing interference across multiple types of transactions because of
targets specified
• Performance targets that are derived from relationships between
transactions, batch jobs, and reports can be estimated based on
business workload models. However, very often customers provide
the number of concurrent users and response time targets, but not
throughput targets. At other times the throughput targets and the number of
concurrent users are given, but not cycle time targets; or
throughput and cycle time targets are given, but not the number of
concurrent users.
Deriving Performance Targets
• N = X × C
• where N is the (average) number of concurrent users, X
is the system throughput, and C is the average system
cycle time, which is the sum of average response time R
and average think time Z. Thus for example, if a system
has 3,000 concurrent users and a throughput target of
100 transactions per second, the average cycle time per
transaction is 3,000 / 100 = 30 seconds. If the think time
per transaction is 25 seconds, then the average system
response time target is 30 – 25 = 5 seconds.
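This relationship can be coded as a small helper that fills in whichever target is missing. The function name and argument names below are illustrative; the example call reproduces the numbers from the slide (N = 3,000, X = 100, Z = 25).

```python
def derive_targets(concurrent_users=None, throughput_tps=None,
                   cycle_time_s=None, think_time_s=None):
    """Apply N = X * C (with C = R + Z) to fill in the missing target."""
    if cycle_time_s is None and concurrent_users and throughput_tps:
        cycle_time_s = concurrent_users / throughput_tps      # C = N / X
    if throughput_tps is None and concurrent_users and cycle_time_s:
        throughput_tps = concurrent_users / cycle_time_s      # X = N / C
    if concurrent_users is None and throughput_tps and cycle_time_s:
        concurrent_users = throughput_tps * cycle_time_s      # N = X * C
    response_time_s = None
    if cycle_time_s is not None and think_time_s is not None:
        response_time_s = cycle_time_s - think_time_s         # R = C - Z
    return concurrent_users, throughput_tps, cycle_time_s, response_time_s

print(derive_targets(concurrent_users=3000, throughput_tps=100, think_time_s=25))
# -> (3000, 100, 30.0, 5.0): a 30 s cycle time and a 5 s response-time target
```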
V Model
OSI Reference Model
Lower Layers

Layer 1: Physical
o Key responsibilities: Encoding and Signaling; Physical Data Transmission; Hardware Specifications; Topology and Design
o Data type handled: Bits
o Scope: Electrical or light signals sent between local devices
o Common protocols and technologies: (physical layers of most of the technologies listed for the data link layer)

Layer 2: Data Link
o Key responsibilities: Logical Link Control; Media Access Control; Data Framing; Addressing; Error Detection and Handling; Defining Requirements of Physical Layer
o Data type handled: Frames
o Scope: Low-level data messages between local devices
o Common protocols and technologies: IEEE 802.2 LLC; Ethernet Family; Token Ring; FDDI and CDDI; IEEE 802.11 (WLAN, Wi-Fi); HomePNA; HomeRF; ATM; SLIP and PPP

Layer 3: Network
o Key responsibilities: Logical Addressing; Routing; Datagram Encapsulation; Fragmentation and Reassembly; Error Handling and Diagnostics
o Data type handled: Datagrams / Packets
o Scope: Messages between local or remote devices
o Common protocols and technologies: IP; IPv6; IP NAT; IPsec; Mobile IP; ICMP; IPX; DLC; PLP; routing protocols such as RIP and BGP

Layer 4: Transport
o Key responsibilities: Process-Level Addressing; Multiplexing/Demultiplexing; Connections; Segmentation and Reassembly; Acknowledgments and Retransmissions; Flow Control
o Data type handled: Datagrams / Segments
o Scope: Communication between software processes
o Common protocols and technologies: TCP and UDP; SPX; NetBEUI/NBF

Upper Layers

Layer 5: Session
o Key responsibilities: Session Establishment, Management and Termination
o Data type handled: Sessions
o Scope: Sessions between local or remote devices
o Common protocols and technologies: NetBIOS; Sockets; Named Pipes; RPC

Layer 6: Presentation
o Key responsibilities: Data Translation; Compression and Encryption
o Data type handled: Encoded User Data
o Scope: Application data representations
o Common protocols and technologies: SSL; Shells and Redirectors; MIME

Layer 7: Application
o Key responsibilities: User Application Services
o Data type handled: User Data
o Scope: Application data
o Common protocols and technologies: DNS; NFS; BOOTP; DHCP; SNMP; RMON; FTP; TFTP; SMTP; POP3; IMAP; NNTP; HTTP; Telnet