
DESIGN EVALUATION IN HCI

Design Evaluation in Human-Computer Interaction (HCI) is the process of assessing a user interface or interactive system to determine how well it meets user needs, supports usability, and aligns with design goals. It is a critical phase in the user-centered design process.

Key Objectives:

 Assess usability (effectiveness, efficiency, satisfaction)
 Identify design problems or usability issues
 Gather user feedback to improve the product
 Validate design decisions before full development or release

Types of Design Evaluation Methods:

1. Formative Evaluation – Done during the design process to improve the design.
o Examples: Heuristic Evaluation, Cognitive Walkthrough, Think-Aloud Protocol
2. Summative Evaluation – Conducted after a prototype or final product is built to assess its overall usability.
o Examples: Usability Testing, A/B Testing, Surveys
3. Analytical Methods – Carried out by experts or predictive models, with no users involved (see the KLM sketch after this list).
o Examples: Heuristic Evaluation, GOMS Model
4. Empirical Methods – Involve real users performing tasks.
o Examples: Observations, User Testing, Field Studies
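The GOMS family includes the Keystroke-Level Model (KLM), which predicts expert task time by summing standard operator times without testing any users. As a rough illustration, a minimal Python sketch using commonly cited operator estimates (exact values vary by source, so treat the numbers as approximations):

```python
# Minimal KLM-style estimate of expert task time (analytical method, no users involved).
# Operator times are commonly cited approximations; exact values vary by source.
KLM_OPERATORS = {
    "K": 0.20,  # press a key or button (skilled typist)
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def estimate_task_time(operator_sequence):
    """Sum operator times for a task described as a sequence like ['M', 'P', 'K']."""
    return sum(KLM_OPERATORS[op] for op in operator_sequence)

# Example: point to a menu item and click, then type a 5-character filename.
task = ["M", "P", "K", "H", "M"] + ["K"] * 5
print(f"Estimated task time: {estimate_task_time(task):.2f} s")
```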

Common Evaluation Criteria:

 Effectiveness: Can users complete tasks successfully?
 Efficiency: How quickly can users perform tasks?
 Satisfaction: How pleasant is the experience?
 Learnability: How easy is it to learn?
 Errors: How many errors occur and how severe are they?
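These criteria are usually quantified from observed task sessions. A minimal sketch (in Python, with hypothetical logged data) of how effectiveness, efficiency, and error counts might be summarized:

```python
# Hypothetical usability-test log: one record per task attempt.
# Field names and values are illustrative, not from a real study.
sessions = [
    {"user": "P1", "completed": True,  "time_s": 42.0, "errors": 1},
    {"user": "P2", "completed": True,  "time_s": 55.5, "errors": 0},
    {"user": "P3", "completed": False, "time_s": 90.0, "errors": 3},
    {"user": "P4", "completed": True,  "time_s": 38.2, "errors": 0},
]

completed = [s for s in sessions if s["completed"]]

effectiveness = len(completed) / len(sessions)                      # task completion rate
efficiency = sum(s["time_s"] for s in completed) / len(completed)   # mean time on successful attempts
error_rate = sum(s["errors"] for s in sessions) / len(sessions)     # mean errors per attempt

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency:    {efficiency:.1f} s per completed task")
print(f"Errors:        {error_rate:.1f} per attempt")
```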
Different Design Evaluation Techniques:

🔍 1. Heuristic Evaluation

 What it is: Experts review the interface using established usability principles
(e.g., Nielsen's 10 heuristics).
 When to use: Early in the design process; good for catching obvious usability
issues before involving users.
 Pros: Quick, cost-effective, doesn't require users.
 Cons: Depends on evaluator expertise; may miss user-specific issues.
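In practice, each evaluator logs the heuristic violated, where the problem occurs, and a severity rating (Nielsen's scale runs from 0 = not a problem to 4 = usability catastrophe), and the ratings are then aggregated across evaluators. A minimal sketch with hypothetical findings:

```python
# Hypothetical findings from two evaluators; heuristic names follow Nielsen's set,
# severity uses Nielsen's 0-4 scale (4 = usability catastrophe).
findings = [
    {"evaluator": "E1", "heuristic": "Visibility of system status", "issue": "No progress indicator on upload", "severity": 3},
    {"evaluator": "E2", "heuristic": "Visibility of system status", "issue": "No progress indicator on upload", "severity": 2},
    {"evaluator": "E1", "heuristic": "Error prevention",            "issue": "Delete has no confirmation",     "severity": 4},
]

# Average severity per issue, so the most serious problems are addressed first.
by_issue = {}
for f in findings:
    by_issue.setdefault(f["issue"], []).append(f["severity"])

for issue, ratings in sorted(by_issue.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{sum(ratings) / len(ratings):.1f}  {issue}")
```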

🧠 2. Cognitive Walkthrough

 What it is: Experts simulate a user's thought process while stepping through
tasks.
 When to use: During early prototyping stages, especially when evaluating
ease of learning.
 Pros: Focuses on user learning; highlights confusing steps.
 Cons: Time-consuming; best for specific tasks rather than the entire system.

👀 3. Think-Aloud Protocol

 What it is: Users verbalize their thoughts while using the interface.
 When to use: During usability testing to understand users' mental models and
frustrations.
 Pros: Reveals hidden usability issues and decision-making.
 Cons: May influence how users behave; requires analysis of qualitative data.

4. Surveys & Questionnaires

 What it is: Users answer standardized or custom questions post-interaction (e.g., SUS, NASA-TLX).
 When to use: After users have tried the system; often used in summative
evaluation.
 Pros: Easy to administer; collects quantitative and qualitative feedback.
 Cons: Relies on self-reported data; may not reflect real behavior.
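The System Usability Scale (SUS), for example, has ten items answered on a 1–5 scale: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the total is multiplied by 2.5 to give a 0–100 score. A minimal Python sketch of that scoring, with hypothetical responses:

```python
def sus_score(responses):
    """Score one SUS questionnaire: a list of ten answers on a 1-5 scale."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale to 0-100

# Hypothetical responses from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # -> 85.0
```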
🧪 5. A/B Testing

 What it is: Comparing two versions (A and B) to see which performs better.
 When to use: When testing variations in a live environment (e.g., website
layout or feature).
 Pros: Data-driven; useful for optimizing design.
 Cons: Requires large user base; limited to small changes.
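Deciding whether version B actually outperforms A usually comes down to a statistical comparison of the two conversion rates. A minimal sketch using a chi-square test from SciPy (the counts are hypothetical; a two-proportion z-test would work similarly):

```python
from scipy.stats import chi2_contingency

# Hypothetical results: [converted, did not convert] for each variant.
variant_a = [120, 880]   # 12.0% conversion out of 1000 visitors
variant_b = [150, 850]   # 15.0% conversion out of 1000 visitors

chi2, p_value, dof, expected = chi2_contingency([variant_a, variant_b])

print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Difference is unlikely to be due to chance at the 5% level.")
else:
    print("No statistically significant difference detected.")
```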

🌍 6. Field Studies / Ethnographic Studies

 What it is: Observing users in their real environment.
 When to use: To understand context of use, workflows, and user behavior.
 Pros: Reveals real-world usage patterns and needs.
 Cons: Time-consuming and harder to control.
