Assila Etal 2016 Standardized Usability Questionnaires
Abstract – Over the last few decades, more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of the usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems, and can be composed of one or several items. Comparative studies have also been conducted to identify the best one in different situations. All these issues should be considered when choosing a questionnaire. In this paper, we present an extensive review of these questionnaires considering their key features, some classifications, and the main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items evaluated in each questionnaire to indicate those that can identify users' perceptions about specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability criteria proposed by quality standards (ISO 9241-11 and ISO/WD 9241-112) and classical ergonomic quality criteria.

Keywords – Human-Computer Interaction; user interfaces; evaluation; usability; standardized questionnaire.

I. INTRODUCTION

Usability evaluation is of central importance in Human-Computer Interaction (HCI), and many evaluation methods and models have been proposed for it. Among this large number of usability evaluation methods, standardized usability questionnaires are valuable tools intended for the assessment of perceived usability [1]. By gathering user perceptions about user interfaces, questionnaires can help to identify usability flaws to drive improvements and to measure user satisfaction [3]. Various standardized usability questionnaires have been proposed in the literature (see [3]). To choose the best one for each situation, it is important to know their key features, the items they comprise, and the studies and classifications already performed. We also argue that even though a questionnaire is usually designed to address general issues (usability and usefulness), it is relevant to identify which specific issues it can capture about the user interface.

In light of this, we present in this paper a review of 24 standardized usability questionnaires, summarizing their key features, classifications, and the main comparison studies already performed. We then emphasize a review of the questionnaires according to specific related usability criteria. To that end, all items were analyzed against the usability criteria proposed by the best-known quality standards (ISO 9241-11 and ISO/WD 9241-112) and by classical ergonomic quality criteria. Our goal is to provide practitioners and HCI researchers with useful information that supports them in selecting the appropriate tool according to their requirements.

The remainder of this paper is structured as follows. Section II briefly presents the fundamental usability concepts. Section III reviews the most validated standardized usability questionnaires used for assessing user interfaces, based on the literature. Section IV describes our analysis of the questionnaire items based on common standard usability criteria and presents a discussion. Finally, Section V concludes and draws some perspectives.

II. USABILITY EVALUATION

Usability evaluation has been well defined and well studied. Preece et al. indicated that usability is a basic concept in HCI whose main purpose is to make systems easy to use and to learn [9]. Over the last few decades, several usability definitions concerning specific criteria have been published in the HCI literature [10]. According to Shackel, usability is "the capability to be used by humans easily and effectively" and is associated with five criteria, i.e. effectiveness, learnability, retention, error and attitude [12]. Another significant definition is given by Shneiderman [13], who defined usability as "a relation of effectiveness and efficiency of user interface and user's reaction to that interface".

A similar usability definition, which differs only in terminology, is stated by Nielsen [14] and includes five criteria, i.e. efficiency, learnability, memorability, errors/safety and satisfaction [15]. Beyond these definitions, several lists of design principles, heuristics, ergonomic rules and measures for quality criteria have been proposed [10]. These studies aimed to provide the necessary guidelines and measures for evaluating user interfaces and identifying usability problems.

Several international standards have also stated usability definitions [19]. ISO 9241-11 defined usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" [19]. This definition associates three criteria (effectiveness, efficiency and satisfaction). More recently, ISO/IEC¹ 25010 [21], known as the SQuaRE standard (Systems and Software Quality Requirements and Evaluation), has incorporated the ISO 9241-11 usability issues into a model characterized by five criteria, i.e. effectiveness, efficiency, satisfaction, freedom from risk and context coverage. In turn, these criteria are separated into sub-criteria; for example, the satisfaction

¹ IEC: International Electrotechnical Commission
A. Assila, K. de Oliveira and H. Ezzedine, Standardized Usability Questionnaires: Features and Quality Focus
electronic Journal of Computer Science and Information Technology (eJCSIT), Vol. 6, No. 1, 2016
criterion, which includes usefulness, trust, pleasure and comfort [21].

Despite these different ways of defining usability, there is a common understanding that the scope of usability includes the evaluation of effectiveness, efficiency, satisfaction, or the absence of usability problems [4]. Moreover, the evaluation can be classified as formative or summative. According to Hartson et al. [22], formative evaluation focuses on usability problems that need to be solved during the prototype design stage, before a final design can be accepted for release; summative evaluation is then conducted to evaluate the efficacy of the final design or to compare competing design alternatives in terms of usability.

Several research efforts have been undertaken to perform HCI usability evaluation using subjective or objective methods [23]. While objective methods are based on capturing analytic data without direct interaction with users, subjective evaluation methods focus on capturing user attitudes and judgments about perceived usability [25]. Subjective methods include interviews [27], focus groups [28], and questionnaires (our focus in this paper). The questionnaire is undoubtedly the most widely used subjective method, since it is one of the least expensive evaluation methods for collecting data about the perceived usability of user interfaces [8].

III. REVIEW OF STANDARDIZED USABILITY QUESTIONNAIRES

A. Panorama of standardized usability questionnaires used on HCI evaluation

Questionnaires were introduced as a natural way to discover issues related to users' satisfaction ([14]). Generally, standardized usability questionnaires have been proposed to provide a more reliable measure of perceived usability ([1]). In this section, we present a summary review of the most widely used and validated standardized questionnaires for evaluating the usability of user interfaces. We found 24 questionnaires based on the main digital libraries (ACM, IEEE Xplore, Science Direct, Elsevier, and Springer Link), as follows:

- Questionnaire for User Interface Satisfaction (QUIS) [31]
- Technology Acceptance Model questionnaire (TAM) [32]
- After-Scenario Questionnaire (ASQ) [33]
- Computer System Usability Questionnaire (CSUQ) [34]
- Post-Study System Usability Questionnaire (PSSUQ) [35]
- Software Usability Measurement Inventory (SUMI) ([36])
- System Usability Scale (SUS) [37]
- Purdue Usability Testing Questionnaire (PUTQ) [38]
- Website Analysis Measurement Inventory (WAMMI) [39]
- Usefulness, Satisfaction and Ease of use (USE) [40]
- Expectation Ratings (ER) [41]
- Website Usability Evaluation tool (WEBUSE) [42]
- Usability Magnitude Estimation (UME) [43]
- Mobile Phone Usability Questionnaire (MPUQ) [44]
- Single Ease Question (SEQ) ([45])
- Website Evaluation Questionnaire (WEQ) [46]
- Subjective Mental Effort Question (SMEQ) [47]
- Usability Metric for User Experience (UMUX) [48]
- Standardized Universal Percentile Rank Questionnaire (SUPR-Q) ([5])
- Design-oriented Evaluation of Perceived usability (DEEP) [30]
- Turkish-Computer System Usability Questionnaire (T-CSUQ) [50]
- Usability Metric for User Experience-LITE (UMUX-LITE) [51]
- Speech User Interface Service Quality questionnaire (SUISQ) ([52])
- Alternate Usability (AltUsability) [6]

Starting with the first questionnaire, which appeared in the late 1980s, Table 1 shows the main characteristics of the standardized usability questionnaires, considering:

(i) the date of creation, from the first to the last version of the questionnaire;
(ii) the global reliability degree, using coefficient alpha²;
(iii) the kind of interface or software system to which the questionnaire can be applied;
(iv) the number of questionnaire items;
(v) the item styles (question and/or sentence);
(vi) the questionnaire output; and
(vii) the item scales (Likert scale [54], semantic differential scale [55], etc.).

From this table, some notable conclusions can be drawn. We note that 71% (17 of 24) of the questionnaires can be applied to the evaluation of all types of interfaces (e.g. WIMP, Web, etc.) and address computer software in general. Seven questionnaires support the evaluation of specific interfaces: five concern web applications; one (SUISQ) is dedicated to interactive voice response applications; and the last one (MPUQ) concerns mobile applications. Regarding the degree of reliability, all questionnaires have indicated good levels, with Cronbach alpha scores varying between 0.80 and 0.97.

² Coefficient alpha or Cronbach alpha: a fundamental element of psychometric assessment proposed by Nunnally [56]. It is a measure of internal consistency (reliability) that can range from 0 (completely unreliable) to 1 (perfectly reliable). The minimal acceptable value for scores calculated from the average of ratings from a questionnaire is 0.7 [56].
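As an illustrative aside (not part of the original study), the definition of coefficient alpha in this footnote can be turned into a short computation over an n-respondents × k-items score matrix; the function name and sample data below are our own:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Three respondents answering two perfectly consistent items -> alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

By the 0.7 threshold quoted above, a questionnaire whose item scores yield an alpha below 0.7 on a pilot sample would not be considered acceptably reliable.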
The majority of the questionnaires (17 of 24) have very high levels of reliability, ranging between 0.90 and 0.97, while three questionnaires have levels below 0.90 and four others (PUTQ, USE, ER and UME) were validated without their precise Cronbach alpha values being reported. WEQ has the highest level, equal to 0.97; however, this questionnaire is dedicated only to the evaluation of web interfaces. The second highest level of reliability concerns MPUQ (0.96), which is also the only specific standardized questionnaire for mobile applications; nevertheless, it has a large number of items (72). We note also that the two questionnaires ASQ and CSUQ have high levels of reliability, equal to 0.96 and 0.95 respectively, while being characterized by a reduced number of items compared to the others (3 and 19 items, respectively). Furthermore, SUMI and QUIS indicated high reliability levels, equal to 0.94 and 0.92 respectively.

Regarding the outputs of the questionnaires, different presentations of results have been proposed (e.g. graphic form, number, spreadsheets, and CSV files). Furthermore, there are various ways of calculating the results depending on the questionnaire scales, such as the averaging method used by questionnaires that adopted Likert scales, or the SUMISCO analysis program used by the SUMI questionnaire. Regarding the scales used by the questionnaires, the Likert scale is the most common, characterizing the majority of questionnaires. It was adopted by 80% of the questionnaires, with a variety of points (3, 5, 7, 10 or 11), whereas 20% of the questionnaires rely on other types of scales, such as a dichotomous scale (e.g. SUMI) or a semantic scale (e.g. QUIS).

Moreover, some other questionnaires have been proposed that are used in different evaluation contexts for software systems and products. For instance, AttrakDiff [57] is an instrument to evaluate numerous aspects of the user experience, such as the attraction to a product, through the technique of word pairs. The Service User Experience questionnaire is used to assess the capabilities of modern web services in promoting and supporting a positive and engaging user experience [58]. As proposed by McNamara and Kirakowski [59], the Consumer Products Questionnaire allows measuring user satisfaction with electronic consumer products.

More recently, Lewis and Mayes [60] introduced the Emotional Metric Outcomes (EMO) questionnaire as a standardized instrument for assessing emotional outcomes. It aims specifically to measure the effect of customer interaction, either with human or digital services. Nevertheless, it concerns a more specific measurement context (large-sample unmoderated usability studies). Recent research has recommended the use of the EMO questionnaire for user experience measurement, as a complement to the existing standardized usability questionnaires ([7]).

B. Existing classifications of standardized usability questionnaires

A recent survey [3] of the widely used standardized usability questionnaires in the HCI literature divided them into three categories: post-study questionnaires, post-task questionnaires, and website usability questionnaires. The post-study questionnaires (first category) are used at the end of a study, especially after completing a set of test scenarios. The post-task questionnaires (second category) are for a more contextual evaluation, used immediately at the end of each task or scenario in a usability study. The last category includes specific questionnaires dedicated to evaluating web applications, such as WAMMI and SUPR-Q [3]. Among the best-known post-study questionnaires are QUIS, SUMI, PSSUQ and SUS [3]. With regard to the post-task questionnaires, we found ASQ, SEQ, SMEQ, ER and UME [3].

Another significant work that includes a review of standardized usability questionnaires was presented by Yang et al. [30]. They identified three types of questionnaires depending on the kind of system being evaluated: universal perceived usability questionnaires, perceived usability questionnaires for websites, and perceived usability questionnaires for mobile applications. Universal questionnaires include those applicable to assess any type of electronic product (e.g. USE, CSUQ, TAM, QUIS, SUS, and PUTQ), whereas the two other types are specific to questionnaires for assessing websites (e.g. WAMMI) and mobile applications (e.g. MPUQ), respectively. We have classified the 24 standardized questionnaires found in the literature according to both categorizations, as presented in Table 2. Henceforth, we use the term "specific standardized usability questionnaires" to refer to the questionnaires specific to mobile, website or other particular kinds of applications (as in the case of the SUISQ/SUISQ-R questionnaire, which concerns interactive voice response applications) presented in Table 2. As shown in the table, we found in total 17 universal questionnaires, which have been applied in the usability evaluation of several kinds of software applications. Some examples are presented in Table 3.

C. Comparing standardized usability questionnaires

When reviewing the HCI literature, we found some studies that have conducted direct comparisons between various standardized usability questionnaires [3]. These studies have concerned only nine universal questionnaires: SUS, QUIS, CSUQ, UMUX, UMUX-Lite, AltUsability, SEQ, UME and SMEQ. Various studies ([6]) focus more on comparing SUS with other usability questionnaires and on investigating the correlations between them. This can be justified by the fact that SUS is an industry standard described as "quick and dirty", frequently used in a large number of usability studies and referenced in over 600 publications ([4]). Nevertheless, it is more useful for performing a quick general usability assessment than for discovering usability problems with a comprehensive view [37].
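As a concrete illustration of the score-calculation conventions discussed above (the code is ours, not the paper's): the widely published SUS scoring rule takes ten responses on a 1–5 Likert scale, scores odd-numbered (positively worded) items as response − 1 and even-numbered (negatively worded) items as 5 − response, then multiplies the sum by 2.5 to obtain a 0–100 score:

```python
def sus_score(responses):
    """SUS score from ten 1-5 responses (item 1 first).

    Odd-numbered items contribute r - 1; even-numbered items
    contribute 5 - r. The summed contributions (0-40) are
    scaled by 2.5 to the 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Strong agreement on positive items, strong disagreement on negative ones -> 100.0
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))
```

The reversal of even-numbered items is why raw item averages cannot be compared across questionnaires without first applying each instrument's own scoring rule.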
that questionnaires were insensitive, with very little difference between them. For sample sizes greater than five, results revealed that SMEQ had the best percentage of significant t-tests but was nevertheless insufficient ([3]). These two studies used both SMEQ and SEQ ([47]). We may conclude that the two studies [6] and [76] consolidated the use of UMUX, UMUX-LITE, and AltUsability in addition to SUS. We could also say that both SEQ and SMEQ may be more useful than the other post-task questionnaires, since they are more sensitive compared to ASQ, ER and UME ([47]).

However, we note that the other questionnaires listed in Table 1 have not been addressed in comparison studies. We argue that more comparisons between questionnaires are needed, considering not only direct comparisons or sensitivity measures but also the quality issues treated by the questionnaires, in order to support the choice of the most adequate one.

D. Existing quality issues of standardized usability questionnaires

By analyzing the standardized usability questionnaires, we identified that they explicitly address different quality issues. In some papers, these issues denote quality criteria (such as satisfaction, usability, efficiency); in others, they correspond to features of the user interface (such as screen factors, links, layout, etc.). We identify each of these issues as summarized in Tables 4 and 5: the first table concerns the universal questionnaires and the second the specific questionnaires.

As shown in Table 4, we found in total 30 existing quality issues in the universal usability questionnaires. We note also from this table that the following quality issues are the most frequent criteria considered by the questionnaires: system usefulness, usability, overall ease of task completion and overall system. These criteria concern general issues of quality. For example, the CSUQ questionnaire deals with four general criteria: overall system, system usefulness, information quality, and interface quality.

Furthermore, we note that some questionnaires, including PUTQ, SUMI and AltUsability, focus on measuring more specific issues. PUTQ covers eight issues (compatibility, learnability, consistency, flexibility, minimal action, minimal memory load, perceptual limitation and user guidance), but has 100 items, making it the longest instrument we found. SUMI is the second longest instrument, with 50 items; it covers five quality issues (learnability, efficiency, affect, helpfulness, control). Practitioners and researchers should consider, before performing an evaluation, whether a large number of items in a questionnaire can affect user opinions [30]. AltUsability [6] is a recent instrument focused on more specific issues of usability (EasyNav, AbleFind, familiar, need, efficient, control and appeal).

We observe also that ER, SMEQ and SEQ are the shortest questionnaires (with only one item each), covering only a general issue concerning overall ease of task completion. Concerning the specific usability questionnaires, we can see from Table 5 that they cover 38 quality issues. However, not all of them are different; in fact, these issues can vary in the terminology used (e.g. learnability (WAMMI) and ease of learning (MPUQ)). We noted that some questionnaires concern general quality issues; for example, SUPR-Q covers four issues, including usability and appearance. Some others are addressed more specifically. As an example, Navigation in the DEEP questionnaire is refined by the WEQ questionnaire into five sub-criteria (user friendliness, structure, hyperlinks, speed and search option). Some other questionnaires have combined sub-criteria into a single criterion, as in the cases of WEBUSE (content, organization and readability), DEEP (Structure and Information Architecture), and MPUQ (Control and Efficiency, Ease of Learning and Use).

We can conclude that, although the literature indicates several quality issues addressed by the questionnaires, the majority of them are related to general issues and do not explicitly state which item covers each quoted quality issue so as to better support decision-making. Moreover, it would be interesting to make these quality issues uniform with respect to the traditional quality criteria defined by standards and known guidelines. Believing that a detailed analysis of questionnaire items is essential to better support the choice of a questionnaire that best addresses the quality requirements of the specific system being evaluated, we analyzed the 24 usability questionnaires against known quality criteria, as presented in the next section.

IV. ANALYSIS OF STANDARDIZED USABILITY QUESTIONNAIRES BASED ON COMMON STANDARD USABILITY CRITERIA

To perform our analysis, we decided to take into account two widely used sets of usability criteria defined in the literature: those proposed by the ISO 9241-11 standard [19] (effectiveness, efficiency and satisfaction), and ergonomic criteria (e.g. control, compatibility, consistency, flexibility, minimal action, minimal memory load, user guidance, etc.). For the ergonomic criteria, we decided to use those proposed by Scapin and Bastien [17]. To complete the ergonomic criteria, we also decided to use the usability criteria defined by ISO/WD 9241-112 [78]. This standard concerns the ergonomic design principles for interactive systems related to the presentation of information, which are useful for the design and evaluation of all types of user interfaces [78].
Table 4. Quality Issues of Universal Standardized Usability Questionnaires
(Matrix marking, for each of the universal questionnaires QUIS, TAM, PSSUQ/CSUQ, AltUsability, T-CSUQ, SUMI, SUS, PUTQ, USE, UMUX, UMUX-LITE, ASQ, ER, UME, SMEQ and SEQ, which of the following quality issues it covers: satisfaction; overall reaction to the software / overall system; screen factors; terminology and system information; (ease of) learning factors / learnability; system capabilities; ease of use / usability; system usefulness; information quality; interface quality; efficiency / efficient; affect; helpfulness; control; compatibility; consistency; flexibility; minimal action; minimal memory load; perceptual limitation; user guidance; effectiveness; overall ease of task completion; satisfaction with completion time; satisfaction with support information; easy navigation; able find; familiar; need; appeal.)
Table 5. Quality Issues of Specific Standardized Usability Questionnaires
Based on these criteria, we performed an analysis of all the questionnaires with the goal of specifying clearly which usability criteria of the three groups are and are not covered by the standardized questionnaires. About 475 questionnaire items were analyzed according to the selected lists of usability criteria.

A. Analysis results for universal standardized usability questionnaires

of our analysis, due to its more specific criteria for interactive voice response applications. It concerns more the usability of service quality and addresses several criteria (friendliness, politeness of the system, speaking pace, use of familiar terms, naturalness, enthusiasm of the system voice, talkativeness and repetitiveness of the system). In conclusion, the analysis results are synthesized in Table 9. Further, we present in Table 10 some examples of items per criterion.
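A hypothetical sketch (ours; the coverage entries below are made-up examples, not the paper's actual Table 9 data) of how a criteria-coverage matrix like the one produced by this analysis could support the selection of a questionnaire for a given evaluation:

```python
# Illustrative coverage data: questionnaire -> set of usability criteria its
# items cover. Entries are examples only, not the study's real results.
coverage = {
    "WEBUSE": {"effectiveness", "efficiency", "satisfaction", "guidance"},
    "MPUQ":   {"effectiveness", "efficiency", "satisfaction", "compatibility"},
    "SUPR-Q": {"efficiency", "satisfaction", "guidance"},
}

def questionnaires_covering(required):
    """Return questionnaires whose items cover every required criterion."""
    return sorted(name for name, crits in coverage.items()
                  if required <= crits)

print(questionnaires_covering({"efficiency", "satisfaction"}))
```

A practitioner who needs, say, compatibility coverage for a mobile evaluation could query such a matrix instead of reading every instrument's item list.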
(Matrix marking, for each of the universal questionnaires QUIS, TAM, PSSUQ, CSUQ, AltUsability, T-CSUQ, SUMI, SUS, PUTQ, USE, UMUX, UMUX-Lite and ASQ, its coverage of the ISO 9241-11 criteria (effectiveness, efficiency, satisfaction), the ISO/WD 9241-112 criteria (detectability, discriminability, appropriateness, consistency, comprehensibility, guidance), and further ergonomic criteria (adaptability, error management, consistency, significance of codes, compatibility).)
Table 8. Item Examples of Universal Usability Questionnaires as a Function of Standard Usability Criteria
Table 8 (cont.). Item Examples of Universal Usability Questionnaires as a Function of Standard Usability Criteria
(Matrix marking, for each specific questionnaire, its coverage of satisfaction; the ISO/WD 9241-112 criteria (detectability, discriminability, appropriateness, consistency, comprehensibility); and the Scapin and Bastien criteria (guidance, workload, consistency, explicit control, adaptability, error management, significance of codes, compatibility).)
Table 10. Item Examples of Specific Usability Questionnaires as a Function of Standard Usability Criteria
ISO 9241-11 criteria

Effectiveness:
- "It is efficient to use this website." (WEBUSE)
- "Does the product support the operation of all the tasks in a way that you find useful?" (MPUQ)

Efficiency:
- "I need not wait too long to download a file or open a page" (WEBUSE)
- "I think it takes a long time to download a new web page from this site." (WEQ)
- "I am able to find what I need quickly on this website" (SUPR-Q)
- "I could quickly get to know the structure of the website by skimming its home page." (DEEP)
- "Does this product enable the quick, effective, and economical performance of tasks?" (MPUQ)

Satisfaction:
- "I don't like using this website" (WAMMI)
- "I enjoy using the website" (SUPR-Q)
- "Do you feel excited when using this product?" (MPUQ)

ISO/WD 9241-112 criteria

Detectability:
- "This website helps me find what I am looking for" (WAMMI)
- "Reading content at this website is easy" (WEBUSE)
- "Placement of links or menu is standard throughout the website and I can easily recognize them" (WEBUSE)
- "It is clear which hyperlink will lead to the information I am looking for." (WEQ)
- "The wording of the text was clear." (DEEP)
- "The highlighted areas of a page helped me locate the information I needed" (DEEP)
- "Are the characters on the screen easy to read?" (MPUQ)

Discriminability:
- "This website seems logical to me." (WAMMI)
- "The content of this website is well organized" (WEBUSE)
- "I find the structure of this website clear" (WEQ)
- "The website has a clean and simple presentation" (SUPR-Q)
- "Under each section of the website, the web pages were well organized." (DEEP)
- "Is the organization of information on the product screen clear?" (MPUQ)

Appropriateness:
- "I can quickly find what I want on this website." (WAMMI)
- "I can easily find what I want at this website" (WEBUSE)
- "I find the information in this website precise" (WEQ)
- "The information on this website is valuable" (SUPR-Q)
- "It was easy to find the information I needed on the website" (DEEP)
- "Is the amount of information displayed on the screen adequate?" (MPUQ)

Comprehensibility:
- "Everything on this website is easy to understand" (WAMMI)
- "I am comfortable and familiar with the language used" (WEBUSE)
- "I find the information in this website easy to understand" (WEQ)
- "The content (including text, pictures, audios, and videos etc.) was easy to understand" (DEEP)
- "Is the interface with this product clear and understandable?" (MPUQ)

Consistency (of information presented):
- "This website has a consistent feel and look" (WEBUSE)
- "The layout under each section of the website was consistent" (DEEP)
- "Is the data display sufficiently consistent?" (MPUQ)

Scapin and Bastien criteria

Guidance:
- "This website helps me find what I am looking for." (WAMMI)
- "This website always provides clear and useful messages when I don't know how to proceed" (WEBUSE)
- "I can easily know where I am at this website" (WEBUSE)
- "I always know where I am on this website" (WEQ)
- "The website has a clean and simple presentation" (SUPR-Q)
- "This website helped me find what I was looking for" (DEEP)
- "Is the backlighting feature for the keyboard and screen helpful?" (MPUQ)

Workload:
- "I can quickly find what I want on this website" (WAMMI)
- "I can easily find what I want at this website" (WEBUSE)
- "I find the information in this website precise" (WEQ)
- "The information on this website is valuable" (SUPR-Q)
- "It was easy to find the information I needed on the website" (DEEP)
- "Are data items kept short?" (MPUQ)

Explicit control:
- "I feel in control when I'm using this website." (WAMMI)
- "It is easy to move around at this website by using the links or back button of the browser" (WEBUSE)
- "Can you regulate, control, and operate the product easily?" (MPUQ)

Consistency (of interface design choices):
- "This website has a consistent feel and look" (WEBUSE)
- "The layout under each section of the website was consistent" (DEEP)
- "Is the data display sufficiently consistent?" (MPUQ)

Adaptability:
- "Have the user needs regarding this product been sufficiently taken into consideration?" (MPUQ)

Error management:
- "This website does not contain too many web advertisements" (WEBUSE)
- "Are the messages aimed at prevent you from making mistakes adequate?" (MPUQ)
- "Are the error messages effective in assisting you to fix problems?" (MPUQ)

Significance of codes:
- "I get what I expect when I click on things on this website" (WAMMI)
- "I am comfortable and familiar with the language used" (WEBUSE)
- "I find many words in this website difficult to understand" (WEQ)
- "I got what I expected when I clicked on things on this website" (DEEP)
- "Are the command names meaningful?" (MPUQ)

Compatibility:
- "Are the color coding and data display compatible with familiar conventions?" (MPUQ)
Figure 1. Universal standardized Usability Questionnaire: Quality issues and Usability Criteria
Figure 2. Specific standardized Usability Questionnaire: Quality issues and Usability Criteria
adaptable for evaluating new kinds of interactive systems (e.g. ubiquitous systems, tangible systems).

REFERENCES

[1] K. Hornbæk, “Current practice in measuring usability: Challenges to usability studies and research,” International Journal of Human-Computer Studies, vol. 64, no. 2, 2006, pp. 79-102.
[2] J.R. Lewis and J. Sauro, “The factor structure of the System Usability Scale,” in M. Kurosu (Ed.), Human Centered Design, HCII 2009, Springer-Verlag, Heidelberg, Germany, 2009, pp. 94-103.
[3] J. Sauro and J.R. Lewis, “Quantifying the User Experience: Practical Statistics for User Research,” Elsevier, 2012, ISBN: 978-0-12-384968-7.
[4] J.R. Lewis, “Usability: Lessons Learned . . . and Yet to Be Learned,” International Journal of Human-Computer Interaction, vol. 30, no. 9, 2014, pp. 663-684.
[5] J. Sauro, “SUPR-Q: A Comprehensive Measure of the Quality of the Website User Experience,” Journal of Usability Studies, vol. 10, no. 2, 2015, pp. 68-86.
[6] J.R. Lewis, B.S. Utesch, and D.E. Maher, “Measuring Perceived Usability: The SUS, UMUX-LITE, and AltUsability,” International Journal of Human-Computer Interaction, vol. 31, no. 8, 2015, pp. 496-505.
[7] J.R. Lewis, J. Brown, and D.K. Mayes, “Psychometric Evaluation of the EMO and the SUS in the Context of a Large-Sample Unmoderated Usability Study,” International Journal of Human-Computer Interaction, vol. 31, no. 8, 2015, pp. 545-553.
[8] K. Hamborg, B. Vehse, and H. Bludau, “Questionnaire based usability evaluation of hospital information systems,” Electronic Journal of Information Systems Evaluation, vol. 7, no. 1, 2004, pp. 21-30.
[9] J. Preece, Y. Rogers, H. Sharp, D. Benyon, S. Holland, and T. Carey, “Human-Computer Interaction,” Essex, England: Addison-Wesley Longman Limited, 1994.
[10] A. Seffah, M. Donyaee, R. Kline, and H. Padda, “Usability measurement and metrics: A consolidated model,” Software Quality Journal, vol. 14, no. 2, 2006, pp. 159-178.
[11] D. Alonso-Rios, A. Vazquez-Garcia, E. Mosqueira-Rey, and V. Moret-Bonillo, “Usability: A critical analysis and taxonomy,” International Journal of Human-Computer Interaction, vol. 26, 2010, pp. 53-74.
[12] B. Shackel, “Usability – context, framework, design and evaluation,” in B. Shackel and S. Richardson (Eds.), Human Factors for Informatics Usability, Cambridge University Press, Cambridge, 1991, pp. 21-38.
[13] B. Shneiderman, “Designing the User Interface (2nd edition): Strategies for Effective Human-Computer Interaction,” Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1992.
[14] J. Nielsen, “Usability Engineering,” Academic Press, Boston, 1993.
[15] M. van Welie, G. van der Veer, and A. Eliens, “Breaking down Usability,” in M. Sasse and C. Johnson (Eds.), Proceedings of INTERACT '99, Edinburgh, Scotland, 1999, pp. 613-620.
[16] J. Nielsen, “Usability laboratories [Special issue],” Behaviour and Information Technology, vol. 13, 1994.
[17] D.L. Scapin and J.M.C. Bastien, “Ergonomic criteria for evaluating the ergonomic quality of interactive systems,” Behaviour and Information Technology, vol. 16, 1997, pp. 220-231.
[18] B. Shneiderman, “Tree maps for space-constrained visualization of hierarchies,” Human-Computer Interaction Lab, University of Maryland, 1998.
[19] ISO 9241-11, “Ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: Guidance on usability,” 1998.
[20] ISO/IEC 9126-1, “Software engineering – Product quality – Part 1: Quality model,” 2001.
[21] ISO/IEC 25010, “Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – System and software quality models,” International Organization for Standardization, Geneva, Switzerland, 2011.
[22] H.R. Hartson, T.S. Andre, and R.C. Williges, “Criteria for evaluating usability evaluation methods,” International Journal of Human-Computer Interaction, vol. 15, no. 1, 2003, pp. 145-181.
[23] H. Thoma, “A system for subjective evaluation of audio, video and audiovisual quality using MUSHRA and SAMVIQ methods,” Consumer Communications and Networking Conference (CCNC), Las Vegas, 2012, pp. 337-341.
[24] K. Kunze and D. Strohmeier, “Examining subjective evaluation methods used in multimedia quality of experience research,” Quality of Multimedia Experience (QoMEX), Fourth International Workshop, Yarra Valley, VIC, 2012, pp. 51-56.
[25] A. Assila, H. Ezzedine, and M.S. Bouhlel, “A Web questionnaire generating tool to aid for interactive systems quality subjective assessment,” IEEE International Conference on Control, Decision and Information Technologies, Hammamet, Tunisia, pp. 1-7.
[26] N. Nishiuchi and Y. Takahashi, “Objective Evaluation Method of Usability Using Parameters of User’s Fingertip Movement,” Transactions on Computational Science XXV, Lecture Notes in Computer Science, vol. 9030, 2015, pp. 77-89.
[27] H. Olsen, “An evaluation of Danish qualitative interview investigations,” Nordisk Psykologi, vol. 54, no. 2, 2002, pp. 145-172.
[28] J. Nielsen, “The use and misuse of focus groups,” IEEE Software, vol. 14, no. 1, 1997, pp. 94-95.
[29] R. Hartson and P.S. Pyla, “The UX Book: Process and Guidelines for Ensuring a Quality User Experience,” Amsterdam, the Netherlands: Morgan Kaufmann, Elsevier, 2012, ISBN: 978-0-12-385241-0.
[30] T. Yang, J. Linder, and D. Bolchini, “DEEP: Design-Oriented Evaluation of Perceived Usability,” International Journal of Human-Computer Interaction, vol. 28, no. 5, 2012, pp. 308-346.
[31] J.P. Chin, V.A. Diehl, and K.L. Norman, “Development of an instrument measuring user satisfaction of the human-computer interface,” in Proceedings of CHI 1988, ACM, Washington, DC, 1988, pp. 213-218.
[32] F.D. Davis, “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology,” MIS Quarterly, vol. 13, 1989, pp. 319-340.
[33] J.R. Lewis, “Psychometric evaluation of an after-scenario questionnaire for computer usability studies: The ASQ,” ACM SIGCHI Bulletin, vol. 23, no. 1, 1991, pp. 78-81.
[34] J.R. Lewis, “IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use,” International Journal of Human-Computer Interaction, vol. 7, 1995, pp. 57-78.
[35] J.R. Lewis, “Psychometric evaluation of the Post-Study System Usability Questionnaire: The PSSUQ,” in Proceedings of the Human Factors Society 36th Annual Meeting, Human Factors Society, Santa Monica, CA, 1992, pp. 1259-1263.
[36] J. Kirakowski and M. Corbett, “SUMI: The Software Usability Measurement Inventory,” British Journal of Educational Technology, vol. 24, 1993, pp. 210-212.
[37] J. Brooke, “SUS: A ‘quick and dirty’ usability scale,” in P. Jordan, B. Thomas, and B. Weerdmeester (Eds.), Usability Evaluation in Industry, Taylor & Francis, London, 1996, pp. 189-194.
[38] H.X. Lin, Y.Y. Choong, and G. Salvendy, “A Proposed Index of Usability: A Method for Comparing the Relative Usability of Different Software Systems,” Behaviour and Information Technology, vol. 16, no. 4/5, 1997, pp. 267-278.
[39] J. Kirakowski and B. Cierlik, “Measuring the usability of websites,” in Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, HFES, Santa Monica, CA, 1998, pp. 424-428.
[40] A. Lund, “Measuring usability with the USE questionnaire,” Usability and User Experience Newsletter, STC Usability SIG, vol. 8, no. 2, 2001, pp. 1-4.
[41] W. Albert and E. Dixon, “Is this what you expected? The use of expectation measures in usability testing,” in Proceedings of the Usability Professionals Association 2003 Conference, Scottsdale, AZ, June 2003.
[42] K.T. Chiew and S.S. Salim, “WEBUSE: Website usability evaluation tool,” Malaysian Journal of Computer Science, vol. 16, no. 1, 2003, pp. 47-57.
[43] M. McGee, “Usability magnitude estimation,” in Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting, HFES, Santa Monica, CA, 2003, pp. 691-695.
[44] Y.S. Ryu and T.L. Smith-Jackson, “Usability Questionnaire Items for Mobile Products and Content Validity,” in Proceedings of HCI International, Las Vegas, 2005, pp. 22-27.
[45] D.P. Tedesco and T.S. Tullis, “A comparison of methods for eliciting post-task subjective ratings in usability testing,” Proceedings of the Usability Professionals Association Conference, Broomfield, CO, 2006.
[46] S. Elling, L. Lentz, and M. Jong, “Website Evaluation Questionnaire: Development of a Research-Based Tool for Evaluating Informational Websites,” in M.A. Wimmer, H.J. Scholl, and A. Grönlund (Eds.), Lecture Notes in Computer Science, vol. 4656, Springer-Verlag, Berlin Heidelberg, 2007, pp. 293-304.
[47] J. Sauro and J.S. Dumas, “Comparison of three one-question, post-task usability questionnaires,” in Proceedings of CHI 2009, ACM, Boston, 2009, pp. 1599-1608.
[48] K. Finstad, “The usability metric for user experience,” Interacting with Computers, vol. 22, no. 5, 2010, pp. 323-327.
[49] J. Sauro, “The Standardized Universal Percentile Rank Questionnaire (SUPR-Q),” 2011. Available at www.suprq.com/.
[50] O. Erdinç and J.R. Lewis, “Psychometric Evaluation of the T-CSUQ: The Turkish Version of the Computer System Usability Questionnaire,” International Journal of Human-Computer Interaction, vol. 29, no. 5, 2013, pp. 319-326.
[51] J.R. Lewis, B.S. Utesch, and D.E. Maher, “UMUX-LITE – When There’s No Time for the SUS,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013, pp. 2099-2102.
[52] M.D. Polkosky, “Machines as mediators: The challenge of technology for interpersonal communication theory and research,” in E. Konijn (Ed.), Mediated Interpersonal Communication, New York: Routledge, 2008, pp. 34-57.
[53] J.R. Lewis and M.L. Hardzinski, “Investigating the psychometric properties of the Speech User Interface Service Quality questionnaire,” International Journal of Speech Technology, vol. 18, no. 3, 2015, pp. 479-487.
[54] R. Likert, “A Technique for the Measurement of Attitudes,” Archives of Psychology, vol. 140, 1932, pp. 1-55.
[55] J.G. Snider and C.E. Osgood, “Semantic Differential Technique: A Sourcebook,” Chicago: Aldine, 1969.
[56] J.C. Nunnally, “Psychometric Theory,” (3rd revised edition), New York: McGraw-Hill, 1993, ISBN: 978-0070478497.
[57] M. Hassenzahl, M. Burmester, and F. Koller, “AttrakDiff: A questionnaire to measure perceived hedonic and pragmatic quality,” in J. Ziegler and G. Szwillus (Hrsg.), Mensch & Computer 2003: Interaktion in Bewegung, 2003, pp. 187-196.
[58] K. Väänänen-Vainio-Mattila and K. Segerståhl, “A Tool for Evaluating Service User eXperience (ServUX): Development of a Modular Questionnaire,” in User Experience Evaluation Methods in Product Development (UXEM'09), Workshop at the Interact'09 conference, Uppsala, Sweden, 2009.
[59] N. McNamara and J. Kirakowski, “Measuring user-satisfaction with electronic consumer products: The consumer products questionnaire,” International Journal of Human-Computer Studies, vol. 69, no. 6, 2011, pp. 375-386.
[60] J.R. Lewis and D.K. Mayes, “Development and psychometric evaluation of the Emotional Metric Outcomes (EMO) questionnaire,” International Journal of Human-Computer Interaction, vol. 30, 2014, pp. 685-702.
[61] H.S. Naeini and S. Mostowfi, “Using QUIS as Measurement Tool for User Satisfaction Evaluation (Case Study: Vending Machine),” International Journal of Information Science, vol. 5, no. 1, 2015, pp. 14-23.
[62] G.K. Akilli, “User satisfaction evaluation of an educational website,” The Turkish Online Journal of Educational Technology, vol. 4, no. 1, 2005, pp. 85-92.
[63] K. Milis, P. Wessa, S. Poelmans, C. Doom, and E. Bloemen, “The Impact of Gender on the Acceptance of Virtual Learning Environments,” KU Leuven Association, Belgium, 2008.
[64] T. Chandrasekera, “Using Augmented Reality Prototypes in Design Education,” Design and Technology Education: An International Journal, vol. 19, no. 3, 2014, pp. 33-42.
[65] J. Liaskos and J. Mantas, “Measuring the User Acceptance of a Web-Based Nursing Documentation System,” Methods of Information in Medicine, vol. 45, no. 1, 2006, pp. 116-120.
[66] C. Debruyne and P. De Leenheer, “Using a Method and Tool for Hybrid Ontology Engineering: An Evaluation in the Flemish Research Information Space,” Journal of Theoretical and Applied Electronic Commerce Research, vol. 9, no. 2, 2014, pp. 48-63.
[67] E. Van Veenendaal, “Questionnaire based usability testing,” in Conference Proceedings of the European Software Quality Week, Brussels, November 1998.
[68] Z. Mansor, Z.M. Kasirun, S. Yahya, and N.H. Arshad, “The Evaluation of WebCost Using Software Usability Measurement Inventory (SUMI),” International Journal of Digital Information and Wireless Communications, vol. 2, no. 2, 2012, pp. 197-201.
[69] G. McArdle, “Exploring the Use of 3D Collaborative Interfaces for E-Learning,” in H.-N. Teodorescu, J. Watada, and L.C. Jain (Eds.), Intelligent Systems and Technologies, Springer-Verlag, Berlin Heidelberg, 2009, pp. 249-270.
[70] N.M. Rusli, S. Hassan, and N.E. Liau, “Usability Analysis of Students Information System in a Public University,” Journal of Emerging Trends in Engineering and Applied Sciences, vol. 4, no. 6, 2013, pp. 806-810.
[71] R. De Asmundis, “An evaluation model to measure impact and usability of a serious game,” Master’s thesis, University of Bari Aldo Moro, 2014.
[72] M.E.C. Santos, J. Polvi, T. Taketomi, G. Yamamoto, C. Sandor, and H. Kato, “Toward Standard Usability Questionnaires for Handheld Augmented Reality,” IEEE Computer Graphics and Applications, vol. 35, no. 5, 2015, pp. 50-59.
[73] A.H. Zins, U. Bauernfeind, F.D. Missier, N. Mitsche, F. Ricci, H. Rumetshofer, and E. Schaumlechner, “Prototype Testing for a Destination Recommender System: Steps, Procedures and Implications,” in Proceedings of ENTER 2004, Cairo, Springer Verlag, 2004, pp. 249-258.
[74] A. Kiselev and A. Loutfi, “Using a Mental Workload Index as a Measure of Usability of a User Interface for Social Robotic Telepresence,” in Proceedings of the Ro-Man Workshop on Social Robotic Telepresence, 2012, pp. 3-6.
[75] S. Borsci, S. Federici, M. Gnaldi, S. Bacci, and F. Bartolucci, “Assessing User Satisfaction in the Era of User Experience: Comparison of the SUS, UMUX, and UMUX-LITE as a Function of Product Experience,” International Journal of Human-Computer Interaction, vol. 31, no. 8, 2015, pp. 484-495.
[76] T.S. Tullis and J.N. Stetson, “A comparison of questionnaires for assessing website usability,” Proceedings of the Usability Professionals Association Conference, Minneapolis, MN, USA, June 2004, pp. 1-12.
[77] J. Benedek and T. Miner, “Measuring desirability: New methods for evaluating desirability in a usability lab setting,” Usability Professionals Association Conference, Orlando, July 2002.
[78] ISO/WD 9241-112, “Ergonomics of human-system interaction – Part 112: Principles for the presentation of information,” 2013.
[79] ISO 9241-110, “Ergonomics of human-system interaction – Part 110: Dialogue principles,” 2006.
[80] J.M.C. Bastien and D.L. Scapin, “Evaluation des systèmes d'information et critères ergonomiques,” in C. Kolski (Ed.), Environnements évolués et évaluation de l'IHM, vol. 2, Paris: Hermès, 2001, pp. 53-80.
[81] N. Kerzazi and M. Lavallée, “Inquiry on usability of two software process modeling systems using ISO/IEC 9241,” in Electrical and Computer Engineering (CCECE), 2011, pp. 773-776.