
electronic Journal of Computer Science and Information Technology (eJCSIT), Vol. 6, No. 1, 2016

Standardized Usability Questionnaires: Features and Quality Focus

Ahlem Assila 1,2, Káthia Marçal de Oliveira 1, Houcine Ezzedine 1

1 LAMIH UMR CNRS 8201, F-59313, University of Valenciennes, France
2 SETIT, University of Sfax, BP 1175, 3038, Tunisia

{Ahlem.Assila, Kathia.Oliveira, Houcine.Ezzedine}@univ-valenciennes.fr

Abstract – For the last few decades, more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of the usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems, and can be composed of one or several items. Some comparative studies have also been conducted to identify the best one in different situations. All these issues should be considered when choosing a questionnaire. In this paper, we present an extensive review of these questionnaires considering their key features, some classifications and the main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items evaluated in each questionnaire to indicate those that can identify users' perceptions about specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability criteria proposed by quality standards (ISO 9241-11 and ISO/WD 9241-112) and classical ergonomic quality criteria.

Keywords – Human-Computer Interaction; user interfaces; evaluation; usability; standardized questionnaire.

I. INTRODUCTION

It is common sense that usability evaluation has a great importance in Human-Computer Interaction (HCI). When talking about usability evaluation, we address the proposed methods and models of evaluation. Considering the large number of usability evaluation methods, standardized usability questionnaires are valuable tools intended for the assessment of perceived usability [1]. By gathering user perceptions about user interfaces, questionnaires can help to identify usability flaws, make improvements and measure user satisfaction [3]. In the literature, various standardized usability questionnaires have been proposed (see [3]). To choose the best one for each situation, it is important to know their key features, their items, and the studies and classifications already performed. We also argue that, although a questionnaire is usually defined to address general issues (usability and usefulness), it is also relevant to identify which specific issues it can capture about the user interface.

In light of this, we present in this paper a review of 24 standardized usability questionnaires, summarizing their key features, the classifications and the main comparison studies already performed. We then emphasize a review of the questionnaires according to specific related usability criteria. To that end, an analysis of all the items was performed against each usability criterion proposed by the best-known quality standards (ISO 9241-11 and ISO/WD 9241-112) and classical ergonomic quality criteria. Our goal is to provide practitioners and HCI researchers with useful information that supports them in selecting the appropriate tool according to their requirements.

The remainder of this paper is structured as follows. In section 2, we briefly present the fundamental usability concepts. Then, in section 3, we review the most validated standardized usability questionnaires used for assessing user interfaces, based on the literature. In section 4, we describe our analysis of the questionnaire items based on common standard usability criteria. Subsequently, we present a discussion. Finally, we provide a conclusion and draw some perspectives in section 5.

II. USABILITY EVALUATION

Usability evaluation has been well defined and well studied. Preece et al. indicated that usability is a basic concept in HCI whose main purpose is to make systems easy to use and learn [9]. Over the last few decades, several usability definitions concerning specific criteria have been published in the HCI literature [10]. According to Shackel, usability is "the capability to be used by humans easily and effectively", associated with five criteria: effectiveness, learnability, retention, error and attitude [12]. Another significant definition is given by Shneiderman [13], who defined usability as "a relation of effectiveness and efficiency of user interface and user's reaction to that interface" [13].

A similar usability definition, which differs only in terminology, is stated by Nielsen [14] and includes five criteria: efficiency, learnability, memorability, errors/safety and satisfaction [15]. Beyond these definitions, several lists of design principles, heuristics, ergonomic rules and measures for quality criteria have been proposed [10]. These studies aimed to provide the necessary guidelines and measures for evaluating user interfaces and identifying usability problems.

Several international standards have also stated usability definitions [19]. ISO 9241-11 defined usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" [19]. This definition associates three criteria (effectiveness, efficiency and satisfaction). More recently, ISO/IEC¹ 25010 [21], known as the SQuaRE standard (Systems and Software Quality Requirements and Evaluation), has included the ISO 9241-11 usability issues in a model characterized by five criteria: effectiveness, efficiency, satisfaction, freedom from risk and context coverage. In turn, these criteria are separated into sub-criteria; for example, the satisfaction criterion includes usefulness, trust, pleasure and comfort [21].

¹ IEC: International Electrotechnical Commission


Despite these different ways of defining usability, there is a common understanding that the scope of usability includes the evaluation of effectiveness, efficiency, satisfaction or the absence of usability problems [4]. Moreover, the evaluation can be classified as formative or summative. According to Hartson et al. [22], formative evaluation focuses on usability problems that need to be solved during the prototype design stage, before a final design can be accepted for release; summative evaluation is then conducted to evaluate the efficacy of the final design or to compare competing design alternatives in terms of usability.

Several research efforts have been undertaken to perform HCI usability evaluation using subjective or objective methods [23]. While objective methods are based on capturing analytic data without direct interaction with users, subjective evaluation methods focus on capturing user attitudes and judgments about perceived usability [25]. Some of the subjective methods are interviews [27], focus groups [28], and questionnaires (our focus in this paper). The questionnaire is undoubtedly the most widely used subjective method, since it is one of the least expensive evaluation methods for collecting data about the perceived usability of user interfaces [8].

III. REVIEW OF STANDARDIZED USABILITY QUESTIONNAIRES

A. Panorama of standardized usability questionnaires used in HCI evaluation

Questionnaires were introduced as a natural way to discover issues related to users' satisfaction ([14]). Generally, standardized usability questionnaires have been proposed to provide a more reliable measure of perceived usability ([1]). In this section, we present a summary review of the most widely used and validated standardized questionnaires in the evaluation of the usability of user interfaces.

We found 24 questionnaires based on the main digital libraries (ACM, IEEE Xplore, ScienceDirect (Elsevier), and Springer Link), as follows:

- Questionnaire for User Interface Satisfaction (QUIS) [31]
- Technology Acceptance Model questionnaire (TAM) [32]
- After-Scenario Questionnaire (ASQ) [33]
- Computer System Usability Questionnaire (CSUQ) [34]
- Post-Study System Usability Questionnaire (PSSUQ) [35]
- Software Usability Measurement Inventory (SUMI) ([36])
- System Usability Scale (SUS) [37]
- Purdue Usability Testing Questionnaire (PUTQ) [38]
- Website Analysis and Measurement Inventory (WAMMI) [39]
- Usefulness, Satisfaction and Ease of use (USE) [40]
- Expectation Ratings (ER) [41]
- Website Usability Evaluation tool (WEBUSE) [42]
- Usability Magnitude Estimation (UME) [43]
- Mobile Phone Usability Questionnaire (MPUQ) [44]
- Single Ease Question (SEQ) ([45])
- Website Evaluation Questionnaire (WEQ) [46]
- Subjective Mental Effort Question (SMEQ) [47]
- Usability Metric for User Experience (UMUX) [48]
- Standardized Universal Percentile Rank Questionnaire (SUPR-Q) ([5])
- Design-oriented Evaluation of Perceived usability (DEEP) [30]
- Turkish-Computer System Usability Questionnaire (T-CSUQ) [50]
- Usability Metric for User Experience-LITE (UMUX-LITE) [51]
- Speech User Interface Service Quality questionnaire (SUISQ) ([52])
- Alternate Usability (AltUsability) [6]

Starting with the first questionnaire, which appeared in the late 1980s, Table 1 shows the main characteristics of the standardized usability questionnaires, considering:

(i) the date of creation, from the first to the last version of the questionnaire;
(ii) the global reliability degree, using coefficient alpha²;
(iii) the kind of interface or software system to which the questionnaire can be applied;
(iv) the number of questionnaire items;
(v) the item style (question and/or sentence);
(vi) the questionnaire output; and
(vii) the item scales (Likert scale [54], semantic differential scale [55], etc.).

From this table, some notable conclusions can be drawn. We note that 71% (17 of 24) of the questionnaires can be applied to the evaluation of all types of interfaces (e.g. WIMP, Web, etc.) and are addressed to computer software in general. Seven questionnaires support the evaluation of specific interfaces: five concern web applications, one (SUISQ) is dedicated to interactive voice response applications, and the last one (MPUQ) concerns mobile applications. Regarding the degree of reliability, all questionnaires have indicated good levels, with Cronbach alpha scores varying between 0.80 and 0.97.

² Coefficient alpha or Cronbach alpha: a fundamental element of psychometric assessment proposed by Nunnally [56]. It is a measure of internal consistency (reliability) that can range from 0 (completely unreliable) to 1 (perfectly reliable). The minimal acceptable value for scores calculated from the average of ratings from a questionnaire is 0.7 [56].
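To make the reliability figures reported below concrete, the following minimal sketch computes coefficient alpha exactly as defined in footnote 2, from a respondents-by-items rating matrix (the ratings here are invented for illustration only):

```python
# Sketch: Cronbach's alpha for a respondents-by-items rating matrix.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
import numpy as np

ratings = np.array([  # 5 hypothetical respondents x 4 items (1..7 Likert)
    [6, 5, 6, 7],
    [4, 4, 5, 4],
    [7, 6, 6, 6],
    [3, 4, 3, 4],
    [5, 5, 6, 5],
])

def cronbach_alpha(x):
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1)       # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(round(cronbach_alpha(ratings), 3))
```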


Table 1. Key Properties of Existing Standardized Usability Questionnaires

| Questionnaire (versions) | GR | Kind of user interface or system | Number of items | Item style | Outputs | Item scales |
| QUIS (1988-2011) | 0.94 | Products and computer software | 27 (Version 5) | sentence | Results can be imported into statistical programs and spreadsheets | 10-point semantic scale |
| TAM (1989) | 0.94-0.98 | Products and software | 12 | sentence | No information | 7-point semantic scale |
| ASQ (1990-1995) | 0.96 | Computer software | 3 | sentence | Results are calculated as the average score over the seven points of the scale | 7-point Likert scale |
| PSSUQ (1992-2002) | 0.94 | Computer systems | 18 (Version 1); 19 (Version 2); 16 (Version 3) | sentence | Results are calculated as the average score over the seven points of the scale | 7-point Likert scale |
| SUMI (1993-2011) | 0.92 | Software applications | 50 (Version 4) | sentence | Results are calculated with the SUMISCO program, which generates a CSV output file | 3-point dichotomous scale |
| CSUQ (1995-2002) | 0.95 | Computer systems | 18 (Version 1); 19 (Version 2); 16 (Version 3) | sentence | Results are calculated as the average score over the seven points of the scale | 7-point Likert scale |
| SUS (1996) | 0.92 | Computer software | 10 | sentence | Overall SUS value = sum of the item scores multiplied by 2.5; scores range from 0 to 100 | 5-point Likert scale |
| PUTQ (1997) | Not pub. | Information systems | 100 | question | No information | 7-point Likert scale |
| WAMMI (1998-2000) | 0.90 | Any kind of website | 20 | sentence | Results are reported in graphical format | 5-point Likert scale |
| USE (2001) | Not pub. | Products and computer software | 30 | sentence | No information | 7-point Likert scale |
| ER (2003) | Not pub. | Computer software | 2 | sentence | Results are presented as a scatter plot of the returned scores | Likert scale: 7-point (Version 1); 5-point (Version 2) |
| WEBUSE (2003) | >0.8 | All types of websites | 24 | sentence | A report indicating the usability aspect, the level for each criterion and the average score | 5-point Likert scale |
| UME (2003) | Not pub. | Computer software or products | 1 | sentence | Results are calculated using a mathematical formula related to the UME | UME scale with ratings from 1 to 100 |
| MPUQ (2005) | 0.96 | Mobile phone applications | 72 | question | Outputs are based on an Analytic Hierarchy Process analysis integrated into decision-making models | 7-point Likert scale |
| SEQ (2006) | >0.94 | Computer software | 1 | sentence | Results are calculated as the average score over the (5 or 7) points of the scale | Likert scale: 5-point (Version 1); 7-point (Version 2) |
| WEQ (2007) | 0.97 | Websites of governmental organizations | 32 | sentence | Results are presented in a report including the analysis of users' comments on their scores | 5-point Likert scale |
| SMEQ (2009) | >0.94 | Computer software | 1 | question | No information | Graduated scale from 0 to 150 |
| UMUX (2010) | 0.94 | Computer software | 4 | sentence | Overall UMUX score: sum the four items, divide by 24, and multiply by 100 | 7-point Likert scale |
| SUPR-Q (2011) | 0.94 | Website interfaces | 13 | question and sentence | Results compare the returned scores with other websites' scores and provide relative rankings expressed as percentages | Likert scale: 5-point (Version 1); 11-point (Version 2) |
| DEEP (2012) | 0.95 | Information-intensive web systems | 19 | sentence | No information | 5-point Likert scale |
| T-CSUQ (2013) | 0.85 | Computer systems | 13 | sentence | Results are calculated as the average score over the seven points of the scale | 7-point Likert scale |
| UMUX-LITE (2013) | 0.82/0.83 | Computer software | 2 | sentence | Score = [(Item1 + Item2) - 2] x (100/12); scores range from 0 to 100 | 7-point Likert scale |
| SUISQ (2008) / SUISQ-R (2015) | 0.93 / 0.88 | Interactive voice response applications | 25 (Version 1); 14 (Version 2) | sentence | Results are calculated as the average rating | 5-point Likert scale |
| AltUsability (2015) | 0.9 | Computer software | 7 | sentence | Overall value = [sum(Item1..Item7) - 7] x (100/42); scores range from 0 to 100 | 7-point Likert scale |

GR = global reliability (Cronbach's alpha); Not pub. = not published.
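Several of the outputs quoted in Table 1 are closed-form scores. The sketch below implements them; note that the SUS item recoding (odd items contribute score-1, even items 5-score) follows Brooke [37], and the UMUX items are assumed to be recoded to 0..6 before summing, which Table 1's short description leaves implicit:

```python
# Sketch of the closed-form scoring rules quoted in Table 1. Responses are
# assumed to be 1..5 (SUS) or 1..7 (UMUX-LITE, AltUsability) Likert ratings.

def sus_score(items):
    """SUS: summed item contributions multiplied by 2.5 (range 0-100).
    Odd items contribute (score - 1), even items (5 - score), per Brooke [37]."""
    contrib = [(s - 1) if i % 2 == 0 else (5 - s) for i, s in enumerate(items)]
    return sum(contrib) * 2.5

def umux_score(recoded_items):
    """UMUX (Table 1): sum the four items (each recoded to 0..6),
    divide by 24, multiply by 100."""
    return sum(recoded_items) / 24 * 100

def umux_lite_score(item1, item2):
    """UMUX-LITE (Table 1): [(Item1 + Item2) - 2] * (100/12), range 0-100."""
    return ((item1 + item2) - 2) * (100 / 12)

def alt_usability_score(items):
    """AltUsability (Table 1): [sum(Item1..Item7) - 7] * (100/42), range 0-100."""
    return (sum(items) - 7) * (100 / 42)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
print(umux_lite_score(6, 5))                       # 75.0
```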


The majority of the questionnaires (17 of 24) have very high levels of reliability, ranging between 0.90 and 0.97, while three questionnaires have levels below 0.90 and four others (PUTQ, USE, ER and UME) were validated without their precise Cronbach alpha values being published. WEQ has the highest level, equal to 0.97; however, this questionnaire is dedicated only to the evaluation of web interfaces. The second highest level of reliability concerns MPUQ (0.96), which is also the only specific standardized questionnaire for mobile applications. Nevertheless, it has a large number of items (72). We note also that the two questionnaires ASQ and CSUQ have high levels of reliability, equal to 0.96 and 0.95 respectively, while being characterized by a reduced number of items compared to the others (3 and 19, respectively). Furthermore, QUIS and SUMI indicated high reliability levels, equal to 0.94 and 0.92, respectively.

Regarding the outputs of the questionnaires, different presentations of results have been proposed (e.g. graphic form, numbers, spreadsheets, and CSV files). Furthermore, there are various ways of calculating the results, depending on the questionnaire scales, such as the averaging method used by the questionnaires that adopted Likert scales, or the SUMISCO analysis program used by the SUMI questionnaire. Regarding the scales used by the questionnaires, the Likert scale is the most common: it was adopted by 80% of the questionnaires, with a variety of points (3, 5, 7, 10 or 11), whereas 20% of the questionnaires rely on other types of scales, such as a dichotomous scale (e.g. SUMI) or a semantic scale (e.g. QUIS).

Moreover, some other questionnaires have also been proposed, used in different evaluation contexts for the evaluation of software systems and products. For instance, AttrakDiff [57] is an instrument to evaluate numerous aspects of the user experience, such as the attraction to a product, through the technique of word pairs. The Service User Experience questionnaire is used to assess the capabilities of modern web services in promoting and supporting a positive and engaging user experience [58]. As proposed by McNamara and Kirakowski [59], the Consumer Products Questionnaire allows measuring user satisfaction with electronic consumer products.

More recently, Lewis and Mayes [60] introduced the Emotional Metric Outcomes (EMO) questionnaire as a standardized instrument for assessing emotional outcomes. It aims specifically to measure the effect of customer interaction, either with human or digital services. Nevertheless, it concerns a more specific measurement context (large-sample unmoderated usability studies). New research has recommended the use of the EMO questionnaire for user experience measurement as a complement to the existing standardized usability questionnaires ([7]).

B. Existing classifications of standardized usability questionnaires

A recent survey [3] of the widely used standardized usability questionnaires in the HCI literature divided them into three categories: post-study questionnaires, post-task questionnaires, and website usability questionnaires. The post-study questionnaires (first category) are used at the end of a study, especially after completing a set of test scenarios. The post-task questionnaires (second category) are for a more contextual evaluation, used immediately at the end of each task or scenario in a usability study. The last category includes specific questionnaires dedicated to evaluating web applications, such as WAMMI and SUPR-Q [3]. Among the best-known post-study questionnaires, we found QUIS, SUMI, PSSUQ and SUS [3]. With regard to the post-task questionnaires, we found ASQ, SEQ, SMEQ, ER and UME [3].

Another significant work that includes a review of standardized usability questionnaires was presented by Yang et al. [30]. They identified three types of questionnaires depending on the kind of system evaluated: universal perceived usability questionnaires, perceived usability questionnaires for websites, and perceived usability questionnaires for mobile applications. Universal questionnaires include those applicable to assess any type of electronic product (e.g. USE, CSUQ, TAM, QUIS, SUS, and PUTQ), while the two other types are specific to questionnaires for assessing websites (e.g. WAMMI) and mobile applications (e.g. MPUQ), respectively. We have classified the 24 standardized questionnaires found in the literature according to both categorizations, as presented in Table 2. Henceforth, we use the term "specific standardized usability questionnaires" to refer to the questionnaires specific to mobile, website or other specific kinds of applications (as is the case of the SUISQ/SUISQ-R questionnaire, which concerns interactive voice response applications) presented in Table 2. As shown in the table, we found in total 17 universal questionnaires, which have been applied in the usability evaluation of several kinds of software applications. Some examples are presented in Table 3.

C. Comparing standardized usability questionnaires

When reviewing the HCI literature, we found some studies that have conducted direct comparisons between various standardized usability questionnaires [3]. These studies concerned only nine universal questionnaires: SUS, QUIS, CSUQ, UMUX, UMUX-Lite, AltUsability, SEQ, UME and SMEQ. Various studies ([6]) focus more on comparing SUS with other usability questionnaires and on investigating the correlation between them. This can be justified by the fact that SUS is an industry standard, described as "quick and dirty", frequently used in a large number of usability studies, and referenced in over 600 publications ([4]). Nevertheless, it is more useful for performing a quick general usability assessment than for discovering usability problems with a comprehensive view [37].


Table 2. Classifications of Questionnaires

| Questionnaire | Yang et al. classification [30] | Sauro and Lewis classification [3] |
| QUIS | Universal | Post-study |
| TAM | Universal | Post-study |
| ASQ | Universal | Post-task |
| PSSUQ | Universal | Post-study |
| SUMI | Universal | Post-study |
| CSUQ | Universal | Post-study |
| SUS | Universal | Post-study |
| PUTQ | Universal | Post-study |
| WAMMI | For Website | For Website |
| USE | Universal | Post-study |
| ER | Universal | Post-task |
| WEBUSE | For Website | For Website |
| UME | Universal | Post-task |
| MPUQ | For Mobile | Post-study |
| WEQ | For Website | For Website |
| SMEQ | Universal | Post-task |
| SEQ | Universal | Post-task |
| UMUX | Universal | Post-study |
| SUPR-Q | For Website | For Website |
| DEEP | For Website | For Website |
| T-CSUQ | Universal | Post-study |
| UMUX-LITE | Universal | Post-study |
| SUISQ/SUISQ-R | — | Post-study |
| AltUsability | Universal | Post-study |
| Total | 17 Universal; 5 For Website; 1 For Mobile (of 24) | 14 Post-study; 5 Post-task; 5 For Website (of 24) |

Table 3. Examples of Software Applications that have Applied Universal Questionnaires

| Questionnaire | Examples of software applications |
| QUIS | Vending machine [61]; educational software [62] |
| TAM | Virtual learning systems [63]; augmented reality applications [64] |
| ASQ | Nursing information systems [65]; office application systems [33] |
| PSSUQ | Research information systems [66] |
| SUMI | Product data management system [67]; WebCost applications [68] |
| CSUQ | Virtual learning systems [63]; e-learning systems with virtual reality [69]; students' information system [70] |
| SUS | Serious games [71]; augmented reality software [72] |
| PUTQ | Recommender systems (travel support system) [73] |
| USE | Robotic telepresence system [74] |
| ER | Intranet site application [45] |
| UME | Information systems (travel application) [47] |
| SMEQ | Information systems (travel application) [47] |
| SEQ | Intranet site application [45] |
| UMUX | E-learning applications [75] |
| T-CSUQ | Web-based course management system [50] |
| UMUX-LITE | E-learning applications [75] |

In the study conducted by Tullis and Stetson [76] for assessing the usability of websites, SUS was compared with four questionnaires (QUIS, CSUQ, Words³, and Ours⁴). The reported analysis results showed that SUS was the fastest questionnaire to converge on the correct conclusion, and also that it gave reliable results across all the sample sizes used. Furthermore, two recent studies ([6]) investigated the correlation between SUS and other questionnaires: the first compared SUS with UMUX-LITE and AltUsability, and the second compared SUS with UMUX and UMUX-LITE. The results of the two studies reported high correlations and correspondences between them (for more details, see [6]). Two other significant comparative studies are reported by Tedesco and Tullis [45] and Sauro and Dumas [47]. Those studies concerned the five post-task standardized usability questionnaires [3]. In these studies, the authors focused on determining the most sensitive questionnaire by measuring their sensitivities⁵.

In the study conducted by Tedesco and Tullis [45], five questionnaires (SEQ-V1, SEQ-V2, ASQ, ER and SEQ-V3) were compared. The analysis showed good results for all questionnaires with the larger sample size. However, it showed that SEQ-V1 was more sensitive at the smaller sample sizes [45]. In the second study, Sauro and Dumas compared SEQ with the two questionnaires SMEQ and UME. For small sample sizes (< 5), the analyses indicated that the questionnaires were insensitive, with very little difference between them.

³ Adapted from Microsoft's Product Reaction Cards [77]
⁴ Questionnaire for assessing website usability [76]
⁵ The capability of a standardized usability questionnaire to indicate a significant difference between systems.


For sample sizes greater than five, the results revealed that SMEQ had the best percentage of significant t-tests, but it was nevertheless insufficient ([3]). Both of these studies used SMEQ and SEQ ([47]).

We may conclude that the two studies [6] and [76] consolidated the use of UMUX, UMUX-LITE, and AltUsability in addition to SUS. We could also say that both SEQ and SMEQ may be more useful than the other post-task questionnaires, since they are more sensitive compared to ASQ, ER and UME ([47]).

However, we note that the other questionnaires listed in Table 1 have not been addressed in comparison studies. We argue that more comparisons between questionnaires are needed, considering not only direct comparisons or sensitivity measures but also the quality issues treated by the questionnaires, in order to support the choice of the most adequate one.
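As an illustration of the sensitivity measure defined in footnote 5, a resampling procedure in the spirit of these studies (a sketch with invented ratings, not the exact procedure of [45] or [47]) can estimate how often a questionnaire detects a difference between two systems at a given sample size:

```python
# Sketch: estimating questionnaire "sensitivity" (footnote 5) by resampling
# ratings at small sample sizes and counting significant t-tests.
# The rating data below are invented for illustration only.
import random
from scipy.stats import ttest_ind

system_a = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6, 7, 5]  # hypothetical ratings, system A
system_b = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4, 3, 5]  # hypothetical ratings, system B

def sensitivity(a, b, n, trials=1000, alpha=0.05):
    """Fraction of size-n subsamples whose t-test detects a difference."""
    hits = 0
    for _ in range(trials):
        sa = random.sample(a, n)
        sb = random.sample(b, n)
        if ttest_ind(sa, sb).pvalue < alpha:
            hits += 1
    return hits / trials

for n in (3, 5, 8):  # sensitivity should grow with the sample size
    print(n, sensitivity(system_a, system_b, n))
```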
D. Existing quality issues of standardized usability questionnaires

By analyzing the standardized usability questionnaires, we identified that they explicitly claim different quality issues. In some papers, these issues denote quality criteria (such as satisfaction, usability, efficiency); in others, they correspond to features of the user interface (such as screen factors, links, layout, etc.). We identify each of these issues as summarized in Tables 4 and 5: the first table concerns the universal questionnaires and the second the specific questionnaires.

As shown in Table 4, we found in total 30 quality issues in the universal usability questionnaires. We note also from this table that the following quality issues are the most frequent criteria considered by the questionnaires: system usefulness, usability, overall ease of task completion and overall system. Those criteria concern general issues of quality. For example, the CSUQ questionnaire deals with four general criteria: overall system, system usefulness, information quality, and interface quality.

Furthermore, we note that some questionnaires focus on measuring more specific issues, including PUTQ, SUMI and AltUsability. PUTQ covers eight issues (compatibility, learnability, consistency, flexibility, minimal action, minimal memory load, perceptual limitation and user guidance), but it has 100 items, making it the longest instrument we found. SUMI is the second longest instrument, with 50 items; it covers five quality issues (learnability, efficiency, affect, helpfulness, control). Practitioners and researchers should pay attention to whether a large number of items in a questionnaire can affect user opinions before performing an evaluation [30]. AltUsability [6] is a recent instrument focused on more specific issues of usability (EasyNav, AbleFind, familiar, need, efficient, control and appeal).

We observe also that ER, SMEQ and SEQ are the shortest questionnaires (with only one item), covering only a general issue that concerns overall ease of task completion. Concerning the specific usability questionnaires, we can see from Table 5 that they cover 38 quality issues. However, we found that not all of them are different; in fact, these issues can vary in terms of the terminology used (e.g. learnability (WAMMI) and ease of learning (MPUQ)). We distinguished that some questionnaires concern general quality issues; for example, SUPR-Q covers four issues, including usability and appearance. Some others are addressed more specifically: for example, Navigation, as treated by the DEEP questionnaire, is refined by the WEQ questionnaire into five sub-criteria (user friendliness, structure, hyperlinks, speed and search option). Some other questionnaires have combined sub-criteria into a single criterion, as is the case of WEBUSE (content, organization and readability), DEEP (Structure and Information Architecture), and MPUQ (Control and Efficiency; Ease of Learning and Use).

We can conclude that, although the literature indicates several quality issues addressed by the questionnaires, the majority of them are related to general issues and do not explicitly state which item covers the quoted quality issues to better support decision-making. Moreover, it would be interesting to make these quality issues uniform with respect to traditional quality criteria defined by standards and known guidelines. Believing that a detailed analysis of the items of the questionnaires is essential to better support the choice of the questionnaire that best addresses the quality requirements of the specific system being evaluated, we analyzed the 24 usability questionnaires against known quality criteria, as presented in the next section.

IV. ANALYSIS OF STANDARDIZED USABILITY QUESTIONNAIRES BASED ON COMMON STANDARD USABILITY CRITERIA

To perform our analysis, we decided to take into account two widely used sets of usability criteria defined in the literature: those proposed by the ISO 9241-11 standard [19] (effectiveness, efficiency and satisfaction), and ergonomic criteria (e.g. control, compatibility, consistency, flexibility, minimal action, minimal memory load, user guidance, etc.). For the ergonomic criteria, we decided to use those proposed by Scapin and Bastien [17]. To complete the ergonomic criteria, we also used the usability criteria defined by ISO/WD 9241-112 [78]. This standard concerns ergonomic design principles for interactive systems related to the presentation of information, which are useful for the design and evaluation of all types of user interfaces [78].

Table 4. Quality Issues of Universal Standardized Usability Questionnaires
(Universal questionnaires covered: QUIS, TAM, PSSUQ/CSUQ, AltUsability, T-CSUQ, SUMI, SUS, PUTQ, USE, UMUX, UMUX-LITE, ASQ, ER, UME, SMEQ and SEQ; the number in parentheses indicates how many of them explicitly address each issue.)

- Satisfaction (2)
- Overall reaction to the software / Overall system (4)
- Screen factors (1)
- Terminology and system information (1)
- (Ease of) Learning factors / Learnability (5)
- System capabilities (1)
- Ease of use / Usability (4)
- System usefulness (6)
- Information quality (3)
- Interface quality (3)
- Efficiency / Efficient (3)
- Affect (1)
- Helpfulness (1)
- Control (2)
- Compatibility (1)
- Consistency (1)
- Flexibility (1)
- Minimal action (1)
- Minimal memory load (1)
- Perceptual limitation (1)
- User guidance (1)
- Effectiveness (1)
- Overall ease of task completion (5)
- Satisfaction with completion time (1)
- Satisfaction with support information (1)
- Easy Navigation (1)
- Able Find (1)
- Familiar (1)
- Need (1)
- Appeal (1)

Table 5. Quality Issues of Specific Standardized Usability Questionnaires

| Questionnaire | Quality issues |
| WAMMI | Attractiveness; Controllability; Efficiency; Learnability; Helpfulness |
| WEBUSE | Content, organization, and readability; Navigation and links; User interface design; Performance and effectiveness |
| WEQ | Content - Relevance; Content - Comprehensibility; Content - Comprehensiveness; Navigation - User friendliness (ease of use); Navigation - Structure; Navigation - Hyperlinks; Navigation - Speed; Navigation - Search option; Layout |
| SUPR-Q | Appearance; Loyalty; Usability; Trust |
| DEEP | Content; Structure and Information Architecture; Navigation; Cognitive Effort; Layout Consistency; Visual Guidance |
| MPUQ | Ease of Learning and Use; Helpfulness and Problem Solving Capabilities; Affective Issue and Multimedia Properties; Commands and Minimal Memory Load; Control and Efficiency; Typical Task for Mobile Phone |
| SUISQ | User Goal Orientation; Customer Service Behavior; Speech Characteristics factor; Verbosity |


Table 6. Existing Standardized Usability Criteria for Assessing User Interfaces

| Source | Criteria |
| ISO 9241-11 [19] | Effectiveness; Efficiency; Satisfaction |
| Scapin and Bastien / AFNOR ergonomic criteria [17] | Guidance (prompting, grouping and distinguishing items, immediate feedback, legibility); Workload (brevity, information density); Explicit control (explicit user actions, user control); Adaptability (flexibility, users' experience); Error management (error protection, quality of error messages, error correction); Consistency; Significance of codes; Compatibility |
| ISO/WD 9241-112 [78] | Detectability; Discriminability; Appropriateness; Consistency; Comprehensibility |

Based on these criteria, we performed an analysis of all questionnaires with the goal of specifying clearly which usability criteria of the three groups are covered, and which are not covered, by the standardized questionnaires. About 475 questionnaire items were analyzed according to the selected lists of usability criteria.

A. Analysis results for universal standardized usability questionnaires

For each questionnaire, we followed a detailed per-item analysis against the criteria lists of the three groups. We mainly relied on the meaning of each item to associate it with the usability criterion to which it is most related. For example, the item "I can effectively complete my work using this system" (extracted from the CSUQ questionnaire) is most related to the effectiveness criterion. We used this process to analyze all items of the questionnaires, and we present the results for the universal questionnaires in Table 7. Nevertheless, we excluded four questionnaires (SEQ, SMEQ, ER, and UME), since they contain only one item, which concerns the criterion of overall ease of task completion. Table 8 presents some example items from our analysis per criterion.

We are aware that some criteria are usually interrelated and that some items are related to several criteria. For example, we assigned the item "The organization of information on the system screens is clear" (extracted from CSUQ, PSSUQ, and T-CSUQ) to both criteria: discriminability and guidance (see Table 8). As a second example, "Do the commands have distinctive meanings?" (extracted from PUTQ) is related both to the comprehensibility of the information presented and to the significance of codes.

B. Analysis results for specific standardized usability questionnaires

Following the same per-item analysis process described previously, we analyzed all items of the specific standardized usability questionnaires. For example, we assigned the item from the MPUQ questionnaire "Are the error messages effective in assisting you to fix problems?" to the error management criterion. As an exception, we excluded the SUISQ from our analysis, due to its more specific criteria for interactive voice response applications. It concerns more the usability of service quality and addresses several criteria (friendliness, politeness of the system, speaking pace, use of familiar terms, naturalness, enthusiasm of the system voice, talkativeness and repetitiveness of the system). The analysis results are synthesized in Table 9. Further, Table 10 presents some examples of items per criterion.

C. Discussion

The majority of the most used standardized usability questionnaires (e.g. SUMI, SUS, QUIS, CSUQ) cover general quality issues. The goal of this analysis was to provide practitioners with some support regarding the specific usability criteria covered by the items of these questionnaires. This identification can be useful to select the appropriate questionnaire according to the quality requirements of the system. As shown in Tables 7 and 9, these analyses identified the specific usability criteria covered by the universal and specific usability questionnaires. We note that the majority of universal questionnaires cover more than 6 of the 15 usability criteria (see Figure 1). For example, we found that the five issues addressed by SUMI cover all the mentioned usability criteria.

Concerning the specific standardized usability questionnaires (Figure 2), we found that most of them (WAMMI, WEBUSE, WEQ and DEEP) cover more than 7 of the 15 usability criteria in addition to their declared specific quality issues. Further, we note that the only questionnaire dedicated to mobile user interfaces (MPUQ) covers all the considered usability criteria.

Subsequently, this analysis reveals the usability criteria most addressed by the standardized questionnaires. For the universal questionnaires, the three criteria of ISO 9241-11 (effectiveness, efficiency, and satisfaction) are the best covered, each being addressed by 9 questionnaires (Figure 3). Also, this analysis has shown that 8 of the 12 analyzed universal standardized questionnaires address the guidance criterion, instead of just the single questionnaire (PUTQ) that declares it, as described in Table 4. For the specific questionnaires, we observe that all of them cover the following criteria: efficiency, discriminability, appropriateness, workload and guidance (Figure 4).
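To make the analysis process concrete, the item-to-criterion assignments can be tabulated and per-questionnaire coverage counted, as in the minimal sketch below (only a hypothetical fragment of the 475-item mapping is shown; the items are quoted from Table 8):

```python
# Sketch: counting how many usability criteria each questionnaire covers,
# given an item-to-criterion mapping. Only a small, hypothetical fragment
# of the real mapping is shown.
from collections import defaultdict

# (questionnaire, item, assigned criteria)
mapping = [
    ("CSUQ", "I can effectively complete my work using this system",
     ["effectiveness"]),
    ("CSUQ", "The organization of information on the system screens is clear",
     ["discriminability", "guidance"]),          # one item, two criteria
    ("PUTQ", "Do the commands have distinctive meanings?",
     ["comprehensibility", "significance of codes"]),
    ("SUS", "I found the system very cumbersome to use",
     ["efficiency"]),
]

coverage = defaultdict(set)
for questionnaire, _item, criteria in mapping:
    coverage[questionnaire].update(criteria)

for q, crit in sorted(coverage.items()):
    print(f"{q}: {len(crit)} criteria covered -> {sorted(crit)}")
```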


Table 7. Analysis Results for Universal Standardized Usability Questionnaires
(Questionnaires analyzed: QUIS, TAM, PSSUQ, CSUQ, AltUsability, T-CSUQ, SUMI, SUS, PUTQ, USE, UMUX, UMUX-Lite and ASQ; the number in parentheses indicates how many of them have items covering each criterion.)

- ISO 9241-11 criteria: Effectiveness (9); Efficiency (9); Satisfaction (9)
- ISO/WD 9241-112 criteria: Detectability (4); Discriminability (6); Appropriateness (4); Consistency (5); Comprehensibility (4)
- Scapin and Bastien criteria: Guidance (8); Workload (6); Explicit control (3); Adaptability (6); Error management (7); Consistency (5); Significance of codes (3); Compatibility (2)


Table 8. Item Examples of Universal Usability Questionnaires as a Function of Standard Usability Criteria

ISO 9241-11 criteria

Effectiveness:
- "Tasks can be performed in a straight-forward manner" (QUIS)
- "I can effectively complete my work using this system" (CSUQ/PSSUQ/T-CSUQ)
- "Tasks can be performed in a straight forward manner using this software" (SUMI)
- "I can use it successfully every time" (USE)
- "[This system's] capabilities meet my requirements" ([48], [81]) (UMUX/UMUX-Lite)
- "Overall, I am satisfied with the ease of completing the tasks in this scenario" [34] (ASQ)

Efficiency:
- "I am able to complete my work quickly using this system" (CSUQ/PSSUQ/T-CSUQ)
- "I found the system very cumbersome to use" (SUS)
- "This software responds too slowly to inputs" (SUMI)
- "I have to spend too much time correcting things with [this system]" [81] (UMUX)
- "Overall, I am satisfied with the amount of time it took to complete the tasks in this scenario" [34] (ASQ)
- "Using [this product] in my job would enable me to accomplish tasks more quickly" (TAM)
- "This system helps me to do my job more efficiently" (AltUsability)

Satisfaction:
- "Overall reactions to the software (frustrating ... satisfying)" (QUIS)
- "Overall, I am satisfied with this system" (CSUQ/PSSUQ/T-CSUQ)
- "I think that I would like to use this system" (SUS)
- "Working with this software is satisfying" (SUMI)
- "I am satisfied with it" (USE)
- "Using [this system] is a frustrating experience" ([48], [81]) (UMUX)
- "Overall, I am satisfied with the ease of completing the tasks in this scenario" (ASQ)

ISO/WD 9241-112 criteria

Detectability:
- "Characters on the computer screen (hard to read ... easy to read)" (QUIS)
- "Either the amount or quality of the help information varies across the system" (SUMI)
- "Are selected data highlighted?" (PUTQ)
- "My interaction with [this product] would be clear and understandable" (TAM)

Discriminability:
- "Organization of information on screen (confusing ... very clear)" (QUIS)
- "The organization of information on the system screens is clear" (CSUQ/PSSUQ/T-CSUQ)
- "The way that system information is presented is clear and understandable" (SUMI)
- "Are menus distinct from other displayed information?" (PUTQ)

Appropriateness:
- "I found the various functions in the system were well integrated" (SUS)
- "The software documentation is very informative" (SUMI)
- "Are data items kept short?" (PUTQ)

Comprehensibility:
- "My interaction with [this product] would be clear and understandable" (TAM)
- "I can understand and act on the information provided by this software" (SUMI)
- "Do the commands have distinctive meanings?" (PUTQ)
- "Messages on screen which prompt user for input (confusing ... clear)" (QUIS)

Consistency (of information presented):
- "Use of terms throughout system (inconsistent ... consistent)" (QUIS)
- "I thought there was too much inconsistency in this system" (SUS)
- "I think this software is inconsistent" (SUMI)
- "Is the display format consistent?" (PUTQ)
- "I don't notice any inconsistencies as I use it" (USE)

Scapin and Bastien criteria

Guidance:
- "Highlighting on the screen simplifies task" (QUIS)
- "Computer keeps you informed about what it is doing (never ... always)" (QUIS)
- "Help messages on the screen (unhelpful ... helpful)" (QUIS)
- "The information (such as online help, on-screen messages and other documentation) provided with this system is clear" (CSUQ/PSSUQ/T-CSUQ)
- "The organization of information on the system screens is clear" (CSUQ/PSSUQ/T-CSUQ)
- "The organization of the menus seems quite logical" (SUMI)
- "It makes the things I want to accomplish easier to get done" (USE)
- "Are groups of information demarcated?" (PUTQ)
- "Is the guidance information always available?" (PUTQ)
- "Is HELP provided?" (PUTQ)
- "Overall, I am satisfied with the support information (online help, messages, documentation) when completing the tasks" (ASQ)

Workload:
- "It is easy to find the information I needed" (CSUQ/PSSUQ/T-CSUQ)
- "There is never enough information on the screen when it's needed" (SUMI)
- "There are too many steps required to get something to work" (SUMI)
- "It requires the fewest steps possible to accomplish what I want to do with it" (USE)
- "Is the screen density reasonable?" (PUTQ)

Explicit control:
- "I feel in command of this software when I am using it" (SUMI)
- "Does it provide CANCEL option?" (PUTQ)
- "I feel in control when I work within this system" (AltUsability)

Consistency (of interface design choices):
- "Use of terms throughout system (inconsistent ... consistent)" (QUIS)
- "I thought there was too much inconsistency in this system" (SUS)
- "I think this software is inconsistent" (SUMI)
- "Is the display format consistent?" (PUTQ)
- "I don't notice any inconsistencies as I use it" (USE)


Table 8 (cont.). Item Examples of Universal Usability Questionnaires as a Function of Standard Usability Criteria

Adaptability:
- "Experienced and inexperienced users' needs are taken into consideration" (QUIS)
- "It is obvious that user needs have been fully taken into consideration" (SUMI)
- "It is easy to make the software do exactly what you want" (SUMI)
- "I would find it easy to get [this product] to do what I want it to do" (TAM)
- "I would find [this product] to be flexible to interact with" (TAM)
- "It is flexible" (USE)
- "Both occasional and regular users would like it" (USE)
- "Does system provide good training for different users?" (PUTQ)
- "Can user name displays and elements according to their needs?" (PUTQ)
- "This system offers capabilities familiar to me" (AltUsability)

Error management:
- "Error messages (unhelpful ... helpful)" / "Correcting your mistakes (difficult ... easy)" (QUIS)
- "The system gives error messages that clearly tell me how to fix problems" (CSUQ/PSSUQ/T-CSUQ)
- "Whenever I make a mistake using the system, I recover easily and quickly" (CSUQ/PSSUQ/T-CSUQ)
- "Error messages are not adequate" (SUMI)
- "I can recover from mistakes quickly and easily" (USE)
- "Are erroneous entries displayed?" (PUTQ)
- "Are error messages non-disruptive/informative?" (PUTQ)

Significance of codes:
- "Computer terminology is related to the task you are doing" (QUIS)
- "I sometimes wonder if I am using the right function" (SUMI)
- "Are the command names meaningful?" (PUTQ)
- "Do the commands have distinctive meanings?" (PUTQ)

Compatibility:
- "The software hasn't always done what I was expecting" (SUMI)
- "Is the control of cursor compatible with movement?" (PUTQ)
- "Are the results of control entry compatible with user expectations?" (PUTQ)

Table 9. Analysis Results for the Specific Standardized Usability Questionnaires

| Criteria group | Criterion | WAMMI | WEBUSE | WEQ | SUPR-Q | DEEP | MPUQ |
| ISO 9241-11 | Effectiveness | | X | | | | X |
| ISO 9241-11 | Efficiency | X | X | X | X | X | X |
| ISO 9241-11 | Satisfaction | X | | | X | | X |
| ISO/WD 9241-112 | Detectability | X | X | X | | X | X |
| ISO/WD 9241-112 | Discriminability | X | X | X | X | X | X |
| ISO/WD 9241-112 | Appropriateness | X | X | X | X | X | X |
| ISO/WD 9241-112 | Consistency | | X | | | X | X |
| ISO/WD 9241-112 | Comprehensibility | X | X | X | | X | X |
| Scapin and Bastien | Guidance | X | X | X | X | X | X |
| Scapin and Bastien | Workload | X | X | X | X | X | X |
| Scapin and Bastien | Consistency | | X | | | X | X |
| Scapin and Bastien | Explicit control | X | X | | | | X |
| Scapin and Bastien | Adaptability | | | | | | X |
| Scapin and Bastien | Error management | | X | | | | X |
| Scapin and Bastien | Significance of codes | X | X | X | | X | X |
| Scapin and Bastien | Compatibility | | | | | | X |


Table 10. Item Examples of Specific Usability Questionnaires as a Function of Standard Usability Criteria

ISO 9241-11 criteria

Effectiveness:
- "It is efficient to use this website." (WEBUSE)
- "Does the product support the operation of all the tasks in a way that you find useful?" (MPUQ)

Efficiency:
- "I need not wait too long to download a file or open a page" (WEBUSE)
- "I think it takes a long time to download a new web page from this site." (WEQ)
- "I am able to find what I need quickly on this website" (SUPR-Q)
- "I could quickly get to know the structure of the website by skimming its home page." (DEEP)
- "Does this product enable the quick, effective, and economical performance of tasks?" (MPUQ)

Satisfaction:
- "I don't like using this website" (WAMMI)
- "I enjoy using the website" (SUPR-Q)
- "Do you feel excited when using this product?" (MPUQ)

ISO/WD 9241-112 criteria

Detectability:
- "This website helps me find what I am looking for" (WAMMI)
- "Reading content at this website is easy" (WEBUSE)
- "Placement of links or menu is standard throughout the website and I can easily recognize them" (WEBUSE)
- "It is clear which hyperlink will lead to the information I am looking for." (WEQ)
- "The wording of the text was clear." (DEEP)
- "The highlighted areas of a page helped me locate the information I needed" (DEEP)
- "Are the characters on the screen easy to read?" (MPUQ)

Discriminability:
- "This website seems logical to me." (WAMMI)
- "The content of this website is well organized" (WEBUSE)
- "I find the structure of this website clear" (WEQ)
- "The website has a clean and simple presentation" (SUPR-Q)
- "Under each section of the website, the web pages were well organized." (DEEP)
- "Is the organization of information on the product screen clear?" (MPUQ)

Appropriateness:
- "I can quickly find what I want on this website." (WAMMI)
- "I can easily find what I want at this website" (WEBUSE)
- "I find the information in this website precise" (WEQ)
- "The information on this website is valuable" (SUPR-Q)
- "It was easy to find the information I needed on the website" (DEEP)
- "Is the amount of information displayed on the screen adequate?" (MPUQ)

Comprehensibility:
- "Everything on this website is easy to understand" (WAMMI)
- "I am comfortable and familiar with the language used" (WEBUSE)
- "I find the information in this website easy to understand" (WEQ)
- "The content (including text, pictures, audios, and videos etc.) was easy to understand" (DEEP)
- "Is the interface with this product clear and understandable?" (MPUQ)

Consistency (of information presented):
- "This website has a consistent feel and look" (WEBUSE)
- "The layout under each section of the website was consistent" (DEEP)
- "Is the data display sufficiently consistent?" (MPUQ)

Scapin and Bastien criteria

Guidance:
- "This website helps me find what I am looking for." (WAMMI)
- "This website always provides clear and useful messages when I don't know how to proceed" (WEBUSE)
- "I can easily know where I am at this website" (WEBUSE)
- "I always know where I am on this website" (WEQ)
- "The website has a clean and simple presentation" (SUPR-Q)
- "This website helped me find what I was looking for" (DEEP)
- "Is the backlighting feature for the keyboard and screen helpful?" (MPUQ)

Workload:
- "I can quickly find what I want on this website" (WAMMI)
- "I can easily find what I want at this website" (WEBUSE)
- "I find the information in this website precise" (WEQ)
- "The information on this website is valuable" (SUPR-Q)
- "It was easy to find the information I needed on the website" (DEEP)
- "Are data items kept short?" (MPUQ)

Explicit control:
- "I feel in control when I'm using this website." (WAMMI)
- "It is easy to move around at this website by using the links or back button of the browser" (WEBUSE)
- "Can you regulate, control, and operate the product easily?" (MPUQ)

Consistency (of interface design choices):
- "This website has a consistent feel and look" (WEBUSE)
- "The layout under each section of the website was consistent" (DEEP)
- "Is the data display sufficiently consistent?" (MPUQ)

Adaptability:
- "Have the user needs regarding this product been sufficiently taken into consideration?" (MPUQ)

Error management:
- "This website does not contain too many web advertisements" (WEBUSE)
- "Are the messages aimed at preventing you from making mistakes adequate?" (MPUQ)
- "Are the error messages effective in assisting you to fix problems?" (MPUQ)

Significance of codes:
- "I get what I expect when I click on things on this website" (WAMMI)
- "I am comfortable and familiar with the language used" (WEBUSE)
- "I find many words in this website difficult to understand" (WEQ)
- "I got what I expected when I clicked on things on this website" (DEEP)
- "Are the command names meaningful?" (MPUQ)

Compatibility:
- "Are the color coding and data display compatible with familiar conventions?" (MPUQ)


Figure 1. Universal Standardized Usability Questionnaires: Quality Issues and Usability Criteria

Figure 2. Specific Standardized Usability Questionnaires: Quality Issues and Usability Criteria


Figure 3. Number of Universal Questionnaires by Usability Criteria

Figure 4. Number of Specific Questionnaires by Usability Criteria

V. CONCLUSION AND NEW PERSPECTIVES

In this paper, our purpose was to review existing standardized usability questionnaires to give more support to practitioners and researchers when choosing appropriate usability questionnaires. These questionnaires have been standardized as a function of their reliability and validity measures, and compared on the basis of their sensitivity degrees. The similarity of these measures cannot provide the support required to select questionnaires. Furthermore, the general quality issues characterizing these questionnaires make them difficult to use for detecting users' perceptions about specific usability problems. In this review, we have therefore focused on studying questionnaire items based on the main usability criteria known in the literature. We emphasize some perspectives for further research on usability questionnaires.

To improve these instruments and make them more useful in the detection of usability problems, it is essential to provide more support in the interpretation of their results. With the advancement of technology, intelligent support has proved to be very interesting; technologies such as expert systems, knowledge-based systems or agents should be explored in this direction.

The use of questionnaires can be complemented by several usability methods (such as inspection methods and simulation) to perform a complete usability evaluation. Several research works share this perspective: they have largely recommended and investigated the combination of several usability evaluation methods. We are currently working on the integration of questionnaires with objective usability measures extracted from different evaluation methods, such as task completion, the overall density of a user interface, etc.

Finally, with the development and emergence of new technologies, usability questionnaires are required to be adaptable for evaluating new kinds of interactive systems (e.g. ubiquitous systems, tangible systems).


adaptable for evaluating new kinds of interactive systems [22] H.R. Hartson, T.S. Andre, and R.C. Will, “Criteria for evaluating
(e.g. ubiquitous systems, tangible systems). usability evaluation methods,n” International Journal of Human-