International Journal of Engineering Science Invention
ISSN (Online): 2319 – 6734, ISSN (Print): 2319 – 6726
www.ijesi.org ǁ Volume 3 ǁ Issue 5 ǁ May 2014 ǁ PP. 18-26
A Literature Review of Designing, Modality and Psychological Perspective in Human-Computer Interaction
Rashmi Bakshi¹, Sachin Gupta²
¹Assistant Professor, Department of Information and Technology, Vivekananda Institute of Professional Studies, Delhi, India
²Assistant Professor, Department of Information and Technology, Vivekananda Institute of Professional Studies, Delhi, India
ABSTRACT: Human-computer interaction is a wide field with one common objective: to build efficient interfaces for its intended users.
PURPOSE: The purpose of this paper is to explore the field of HCI by providing an extensive literature review of its sub-areas: interface design, context-aware systems and psychology.
METHODOLOGY: The review summarises the different visualisation techniques used to develop user-friendly systems. The categorisation gives new insight into interfaces in HCI and allows comparison of the methodologies implemented and the problems faced while establishing users' needs. Psychological models that help in understanding users' patterns and that influence design in HCI are also classified.
ORIGINALITY: The present study emphasises design techniques along with their psychological perspective.
KEYWORDS: Human-Computer Interaction (HCI), Tangible/Graphical User Interface (T/GUI), Multimodal, Context-Aware Systems, Pervasive Computing, Ubiquitous Computing, Ambient Systems, Psychology
I. INTRODUCTION
HCI is a vast area of research accommodating multiple fields. It is a mix of computer science (technical skills), psychology and cognitive science (how the human mind works), business (e-commerce), and philosophy and aesthetics (whether a system or piece of software follows the principles of design). It involves in-depth analysis of the programmers who develop a system and of the actual users who use it. HCI is complex, as it involves predicting users' needs by making explicit assumptions about the user. User modelling is performed, where the user's patterns, behaviour and other miscellaneous information are utilised to build user-friendly systems. Einstein's remark "If I can't picture it, I can't understand it" holds true here: visualising the design is important in HCI. A lot of research is required in order to build user-centred design systems. The user-centred design process mainly consists of low-fidelity and high-fidelity prototypes, which are essential in order to give structure to creative ideas. A low-fidelity prototype involves paper prototypes or mock-ups used to conduct early usability testing. An exploratory study is conducted, involving preliminary surveys to gather users' requirements. Personas, sketches and storyboards are built to gain effective knowledge about the intended user. Design principles are revised, and users' constraints and limitations are realised. Users' feedback is recorded, and post-evaluation questions are asked after testing the low-fidelity prototype. A high-fidelity prototype involves the detailed design needed to capture the user's experience through interactive simulation. It helps to view the user's requirements in detail and also involves building design alternatives. Quantitative evaluation is executed, where a hypothesis is formed about the interface or any of its features and tested rigorously.
There are mainly three reasons for conducting research in HCI.
[1] Improving existing systems: existing systems may no longer fulfil the needs of the user, and unsatisfied users are the cause for further improvement. Systems are also improved in terms of scalability and expansion.
[2] Developing new systems: to cater to different varieties of users. The same design may not be able to serve different users; adults' computer interaction styles are not necessarily appropriate interaction styles for children.
[3] Developing guidelines/documentation about design principles that can be used as a reference for the development of similar systems in future. Guidelines relate to sensation and memory and support decisions about graphical layouts, colour combinations and animation styles, Dumais & Czerwinski (2001).
II. EVOLUTION OF DIFFERENT INTERFACES IN HCI
GUIs (Graphical User Interfaces) have existed commercially since 1981 and became the standard paradigm for HCI, Ishii (2006). They represent information graphically so that it can be manipulated with a simple "drag and drop" or "point and click" interaction. They were certainly better than command-line interfaces, where the user had to type and memorise commands for data processing. TUIs (Tangible User Interfaces) are an alternative to GUIs that give physical form to digital information, so that users can directly manipulate or modify the data using haptic interaction (vision, touch and feel), Ishii (2006). T/GUIs (Tangible/Graphical User Interfaces) are interfaces that mix the real world and the virtual world so as to get the best of both. Such interfaces are used to design mixed-reality systems; they are also referred to as embedded systems. An example is a Do-It-Yourself sensor wearable for preventing injury at the workplace, Leung et al. (2012). There are five human senses: sight, touch, hearing, smell and taste. These senses are used as the medium of interaction for input/output operations between humans and machines. Based on human senses, interfaces are subdivided into:
Unimodal interfaces: systems that utilise only one human sense for communication, such as vision being used to view data via a camera, or hearing being used to take in data via a microphone.
Multimodal interfaces: systems where users utilise more than one human sense to provide input data to the machine, such as audio-visual fusion to input data for recognising speech. Multimodal techniques can be used to create different types of interfaces:
Perceptual interfaces: highly interactive, rich, natural and efficient forms of interaction with computers. They sense input and render output and are not feasible with standard I/O devices.
Attentive interfaces: context-aware interfaces that use gathered information to estimate the best time and approach to communicate with the user.
Enactive interfaces: help users gain knowledge of the specific tasks they are engaged in; such tasks involve an act of doing, such as driving a car, Sebe (2009).
AMBIGUITY RELATED TO MULTIMODAL INTERFACES
There is a high level of ambiguity in the term multimodal because of its heavy usage in many contexts and across various disciplines. To elaborate, a combination of keyboard and mouse to input data is considered multimodal. Use of only a keyboard cannot be considered multimodal, even though the user might "view" the keys while typing, "read" the sentence while typing, or locate the keys before pressing them. A clear distinction must be made between what the user does and what the system actually receives as input during an interaction, Sebe (2009). Similarly, using more than one camera to track object movement is not a multimodal approach. However, in this paper it is proposed that using the same or different modalities (such as a camera for vision) to support different visualisation techniques (like object movement and gesture recognition) is considered multimodal. This proposal is opposite to that of Sebe (2009), where a system is considered multimodal only if it combines different modalities. Since most researchers consider different visualisation techniques integrated together as multimodal, this paper accepts different visualisation techniques using the same modality as multimodal. This is helpful for an easy and clear re-evaluation of the techniques used. Multiple modalities used in a system cancel each other's errors and reduce the need for error detection and correction, Oviatt (1999).
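Oviatt's point about mutual disambiguation can be illustrated with a toy decision-level fusion step, in the spirit of the audio-visual systems reviewed below (e.g. Busso et al. 2004). The class labels, scores and weighting here are invented for this sketch; it is not a reconstruction of any cited system.

```python
# Toy decision-level fusion: two modality-specific recognisers each output a
# score per emotion label; a weighted average of the score vectors lets a
# confident modality correct the other's error. All numbers are invented.
LABELS = ["neutral", "happy", "sad", "angry"]

def fuse(audio_scores, video_scores, w_audio=0.5):
    """Weighted average of per-label scores from two modalities."""
    return {label: w_audio * audio_scores[label] + (1 - w_audio) * video_scores[label]
            for label in LABELS}

# Audio alone would pick "neutral" (wrong); the confident video channel corrects it.
audio = {"neutral": 0.45, "happy": 0.05, "sad": 0.40, "angry": 0.10}
video = {"neutral": 0.10, "happy": 0.05, "sad": 0.80, "angry": 0.05}

fused = fuse(audio, video)
print("fused scores:", fused)
print("decision:", max(fused, key=fused.get))   # -> "sad"
```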
Interface-Modality | Authors | Visualisation technique | Methodology
---|---|---|---
GUI-Unimodal | Reilly et al. (2007) | Cognitive mapping | Digital maps were morphed together to test the impact on viewers at a research lab and in informal places.
GUI-Multimodal | Kellar et al. (2005) | Audio, video | Field studies were conducted with 24 pairs of participants using mobile phones and PDAs to analyse practical considerations.
GUI-Multimodal | Kaplan, Yankelovich (2011) | Audio, video, 3D visualisation | A toolkit was built to help users interact with a 3D virtual world, involving avatars, audio authentication, client rendering, networks and chat servers.
GUI-Multimodal | Busso et al. (2004) | Facial expressions (video), speech (audio) | Audio (speech) data and video data (facial expressions captured with markers) were extracted from a database of an actress expressing four emotions: sadness, anger, happiness and a neutral state. Speech and facial features were integrated at decision level to analyse the emotion.
GUI-Multimodal | Ji, Yang (2002); Jaimes, Sebe (2007) | Image acquisition, pupil tracking, eyelid movement, face pose estimation, facial expression recognition | A near-infrared illuminator minimises the impact of varying lighting and produces bright- and dark-pupil effects with a CCD camera. Pupil tracking is done via Kalman filtering. Eyelid movement reflects a person's fatigue through eye closure duration and eye blink frequency.
T/GUI-Unimodal | Leung et al. (2012) | Gesture recognition | Commercialised stretch sensors were used to track wrist movement and alert users graphically about their body posture.
T/GUI-Unimodal | Wu et al. (2011) | Vision | A tangible camera is paired with virtual objects in order to track object movement.
T/GUI-Unimodal | Saponas, Harrison, Benko (2011) | Stroke recognition | A sensor attached to the back of a mobile phone senses finger strokes through fabric.
T/GUI-Unimodal | Reilly et al. (2006) | Location detection | RFID tags were placed behind each location on a paper map; an RFID reader was placed on the back of a PDA.
T/GUI-Multimodal | Reilly et al. (2010) | Vision, light, touch | Three concrete physical-digital designs were built to support communication: the inSpace table, inSpace wall and spinSpace.
T/GUI-Multimodal | Delamare et al. (2012) | Vision, light, gesture recognition | A ray-casting metaphor was implemented for objects that are out of reach yet in line of sight; volume selection helped to solve the problem of accuracy.
T/GUI-Multimodal | Starner et al. (2000) | Vision, light, gesture recognition | A pendant containing a camera recognises the user's gestures, giving him/her control over appliances.
T/GUI-Multimodal | Harrison et al. (2011) | Vision, multi-finger tracking | A wearable system tracks multi-touch finger movement, with finger-click detection achieved by classifying different surfaces; it also involves depth-driven object recognition.
T/GUI-Multimodal | Geurts et al. (2011) | Gesture recognition | Four mini-games were built for patients lacking motor control; a sensor was used to track arm/head movement.
Table 2.1 pairs interfaces with modalities and summarises the work done by past researchers in terms of design. This kind of classification has not been done before. It lessens confusion and makes a clear distinction between the kind of modality applied for a particular interface type. It also lists the different visualisation techniques addressed and the methodology implemented. The table clearly indicates the popularity of multimodality with graphical interfaces, since most of the work has been done in this domain.
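One of the techniques listed in Table 2.1, pupil tracking via Kalman filtering (Ji, Yang 2002), can be sketched with a minimal constant-velocity filter. The matrices, noise levels and measurements below are illustrative assumptions only, not the parameters of the cited system.

```python
# Minimal constant-velocity Kalman filter tracking a pupil's horizontal
# position (pixels) from noisy per-frame detections. All values are invented.
import numpy as np

dt = 1.0                                   # one frame per step
F = np.array([[1, dt], [0, 1]])            # state transition: [position, velocity]
H = np.array([[1, 0]])                     # we only measure position
Q = np.eye(2) * 0.01                       # process noise covariance
R = np.array([[4.0]])                      # measurement noise covariance

x = np.array([[100.0], [0.0]])             # initial state guess
P = np.eye(2)                              # initial state covariance

measurements = [101.2, 103.5, 104.9, 107.1, 109.0]   # noisy pupil detections

for z in measurements:
    # Predict the next state from the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new detection
    y = np.array([[z]]) - H @ x            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    print(f"filtered position: {x[0, 0]:6.2f}, velocity: {x[1, 0]:5.2f}")
```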
Figure 2.1: Classification of different interfaces based on modality, listing the different visualisation techniques used.
Authors | Main objective | Problems faced
---|---|---
Beyer, Holtzblatt (1993) | To include the user's point of view in designing products by collecting data using ethnographic techniques of observing and questioning customers while they work. Usability tests were conducted for the same. | User surveys are not always accurate; most information is unconscious and tacit. Contextual data cannot be used to show a trend. Different people observe differently. Awareness of and willingness to adapt to change affect the timeline of the project.
Landauer (1988) | The goals of conducting research in HCI are comparison of existing systems, invention/design of new systems, discovering/testing relevant scientific principles, and establishing guidelines and standards to meet users' requirements. | Unreliability of intuition and variability of human behaviour. One who has used the system repeatedly will have a biased outlook. Features evaluated in isolation may not give accurate results. Too many variables, design problems, parameters, and different kinds of users and tasks complicate the evaluation process.
Tang et al. (2012) | An empirical study of how FPS players overcome coordination problems in a shared voice channel, conducted through online surveys and competitive tournaments. | FPS games challenge team coordination. It is difficult to locate teammates, find out what they are looking at and how they interpret it. It is difficult to maintain awareness of the environment and develop codes for meaningful communication.
Addlesee et al. (2001) | Explores "sentient computing", where the application understands the perception of the user. | Sentient interfaces are expensive to build and have not yet achieved commercial worth.
Czerwinski & Horvitz (2002) | A study investigating memory for daily computing events. Video clips of participants were collected, and participants were asked to recall the events. | According to user feedback, the navigation controls and the general affordance of the prototype could have been better. An automated system should be able to identify all the events.
Salamin et al. (2010) | A multimodal technique for ubiquitous computing, built around context-aware systems and a semantics-based ontology for users with special needs. The system is divided into three parts: context (the activity the user is engaged in), content (the information the user wants to seek or input) and the rendering application (the output provided). | Limited choice of inputs. The set-up was a GUI, making it less efficient for blind users. More scenarios and user profiles should have been tested to prove the worth of the system.
Lavie, Meyer (2010) | Evaluates the effect of adaptive user interfaces (AUIs). Adaptation levels vary from manual to fully adaptive. Cognitive and physical tasks were accounted for, and routine and non-routine situations were tested with different user age groups. | An AUI is useful as long as situations are known; it cannot adjust itself to unknown situations. Dynamic environmental factors proved to be constraints for the AUI.
Iqbal, Horvitz (2007) | A field study of the multitasking behaviour of computer users, focused on the suspension and resumption of tasks. The triggers studied were email alerts and incoming instant messages. | Users view alerts as an awareness mechanism rather than a trigger to switch tasks. Immediate responses indicate alert-driven interruptions, while delayed responses indicate self-initiated interruptions. Users spend more time than they realise responding to alerts.
Iqbal et al. (2010) | Identifying better and worse times for conversation while driving by examining the interference of cognitive load. | Attending phone calls while driving can have catastrophic effects: drivers have slower braking reaction times, impaired steering control, and are more likely to have an accident. Contradicting this, some drivers subconsciously increased their awareness and became more alert while talking on the phone, thereby increasing performance.
Table 2.2 lists the contributions of researchers in developing context-aware systems. Efforts have been made to understand users' needs by observing their surroundings, collecting feedback and running various experiments. This is a way of collecting tangible information that can improve the performance of a system. The table also lists the problems faced during the experiments, such as unskilled and unreliable users, users with different potential, unfavourable environments and poor design. It gives insight into running smooth experiments with people by avoiding the mistakes already listed. Context-aware systems sometimes also understand users' perception and learning styles; they are therefore interrelated with psychology, which is discussed in the next section.
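To make the context/content/rendering split described for Salamin et al. (2010) in Table 2.2 concrete, the sketch below shows one hypothetical way such a dispatch could be wired up: the user profile and current activity (context) select how a piece of content is rendered. The profile fields and rules are assumptions for illustration, not the cited architecture.

```python
# Hypothetical context-aware rendering dispatch: pick an output modality from
# the user's profile and current activity. Rules and field names are invented.
from dataclasses import dataclass

@dataclass
class Context:
    activity: str          # e.g. "walking", "driving", "at_desk"
    visually_impaired: bool

def choose_rendering(ctx: Context) -> str:
    """Return the output channel to use for the next piece of content."""
    if ctx.visually_impaired or ctx.activity == "driving":
        return "audio"                    # eyes busy or unavailable -> speak it
    if ctx.activity == "walking":
        return "audio+short_text"         # glanceable summary plus speech
    return "graphical"                    # full GUI rendering at the desk

def render(content: str, ctx: Context) -> str:
    channel = choose_rendering(ctx)
    return f"[{channel}] {content}"

print(render("New message from Alice", Context("driving", False)))
print(render("New message from Alice", Context("at_desk", False)))
```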
III. HCI AND PSYCHOLOGY
Psychology is an academic and applied discipline involving the scientific study of mental functions and behaviours. It is a vast domain that includes different approaches to understanding human behaviour, such as cognitive psychology, developmental psychology, social psychology and educational psychology. Psychology is directly related to HCI, where HCI is a science of design seeking to understand and support human beings interacting with technology. The development of the GUI was influenced by psychological research (Johnson 1989). As applications moved from the desktop to mobile devices, a wider set of users and more diverse environments, it became difficult for
people to understand different aspects of the digital world. It also became difficult for designers to satisfy people in terms of usability, Olson et al. (2003). In this study, a review is conducted of certain psychological theories that influence the design of the interfaces discussed in the section above. Cognitive psychology is the study of how people think and learn. It has helped HCI develop models that explain and predict human performance. The goal of cognitive psychology is to understand the psychological processes involved in the acquisition and use of knowledge by people. This includes domains such as perception, attention, memory, learning and thinking, and the importance of social and environmental influences on those domains, Giacoppo (2001). Developmental psychology seeks to understand how people come to perceive, understand and act within the world, and how these processes change as they age. It is necessary in HCI for building efficient applications; for example, building an online learning game for a ten-year-old boy requires designers to study the various developmental stages the child has gone through in order to build something useful for him. Social psychology is the scientific study of how people's thoughts, feelings and behaviours are influenced by the actual, imagined or implied presence of others. Social networking websites have explored this area of psychology in order to add impressive features to their interfaces. They understood their customers' requirements in terms of satisfying their ego and gaining appreciation from their peer group through an increased number of 'Facebook likes' for their uploaded content.
Educational psychology is the psychology of teaching. As interfaces became multimodal, educating users became important. Collaborative virtual environments exist in which people are represented as avatars, simple digital representations of people who move in 3D space. A problem faced during this virtual interaction is the lack of mutual awareness among avatars. Designers also need to be taught to revise and follow the basic design rule "Keep it simple" while designing multimodal interfaces. Design theories are derived from psychology. They are more explanatory and provide guidelines for the design of the interface. Designing interfaces requires decisions about which modality to use and how to mix different modalities, which in turn requires an understanding of brain anatomy. Wickens (2008) built a 4D multiple resource model of mental overload in which he discussed how resources can be shared in finite time by different tasks and how limited mental resources can degrade performance if demanded beyond capacity. He also categorised tasks as primary and secondary. His study is important for taking design decisions such as whether to use keyboards or voice, symbols or text. Multiple resource theory states that multiple tasks can be done efficiently by a human if they use separate cognitive resources (short-term memory, long-term memory, attention and reasoning), Wickens (2002). Iqbal et al. (2010) conducted a controlled study with 18 participants who drove within an interactive driving simulator. Drivers drove on routes with difficult navigation challenges and had to attend phone calls while driving. Their performance was better on simple routes. Three factors were explored in the study: driving complexity (sudden brakes, missed turns, collisions), call type (assimilate, retrieve, generate) and focus (mobile, driving, both). The results show that simple routes are safest for answering phone calls. Cognitive resource demand was higher than its availability while driving on complicated routes and answering questions from memory, thus decreasing performance. On the contrary, some drivers subconsciously increased their awareness and became more alert while talking on the phone, thus increasing performance. Hence a deeper understanding of cognition is needed during multitasking, Iqbal et al. (2010). Iqbal, Bailey (2005) showed that interruptions during periods of higher mental workload cause users to take longer to resume their suspended tasks and have a larger negative effect. Mark et al. (2005) studied the influence of interruptions on task switching and found that users frequently switch between tasks and that 57% of their activities are interrupted.
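A simple way to appreciate how the multiple resource model informs design decisions (keyboard vs. voice, symbols vs. text) is to score the overlap between two concurrent tasks along Wickens' dimensions: the more dimensions they share, the more interference to expect. The dimension encoding and scoring below are a toy approximation for illustration, not Wickens' published computational model.

```python
# Toy interference score in the spirit of multiple resource theory: each task
# is described along the model's dimensions; shared dimensions add conflict.
# The dimension values and the equal weighting are illustrative assumptions.
def interference(task_a: dict, task_b: dict) -> int:
    """Count the resource dimensions on which two concurrent tasks collide."""
    return sum(1 for dim in task_a if task_a[dim] == task_b.get(dim))

driving      = {"stage": "response",   "code": "spatial",    "modality": "visual"}
phone_call   = {"stage": "cognition",  "code": "linguistic", "modality": "auditory"}
reading_text = {"stage": "perception", "code": "linguistic", "modality": "visual"}

# A voice call shares no dimension with driving; reading a text shares vision.
print("driving + phone call  :", interference(driving, phone_call))     # -> 0
print("driving + reading text:", interference(driving, reading_text))   # -> 1
```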
Figure 3: Models developed in psychology in the context of HCI.
Author | Objective | Limitations
---|---|---
Lindsay & Norman (1977) | Humans were characterised as information processors. The model explains the movement of information from input to output within a human via a series of processing stages: encoding, comparison, response selection and response execution. | The user's perception, behaviour and learning technique were not accounted for.
Barber (1988) | An expansion of Lindsay and Norman's model in which attention and memory processes interact with the series of processing stages. It covers how information is perceived, attended to, processed and stored in memory. | Attention and memory are treated only in a generalised form.
Atkinson and Shiffrin (1968) | A multi-store model of memory in which memory is subcategorised into three types. Sensory memory: lasts a few seconds and holds a limited amount of information, which is lost if not attended to. Short-term memory: temporary, lasts about 20 seconds, with limited storage capacity. Long-term memory: permanent, effectively unlimited, and can last a lifetime. | —
Hacker (1973, 1978, 1985, 1986) | Hacker's action theory explains the determinants, processes and consequences of work behaviour. Its main components are acts, actions and operations. Acts: motivated and regulated by intentions (i.e. higher-order goals) and realised through actions. Actions: the smallest units of cognitive and sensory-motor processing oriented towards conscious goals. Operations: components of actions that have no independent goals. | Users constantly change, alter and vary their actions to achieve a goal, so regulating the selection of actions is necessary in order to predict user behaviour. It is still far from being a prescriptive theory that would guide software designers through the use of its components.
Norman (1988) | Provides a list of the stages that users go through in trying to use a system: forming the goal, forming the intention, specifying the action, executing the action, perceiving the system state, interpreting the system state and evaluating the outcome. | —
DESIGN THEORIES | |
Wickens (2008) | A 4D multiple resource model was developed for multiple resource theory. It can be used as a design tool and also to predict multitask overload. D1: stage of processing (cognition, perception, response). D2: codes of processing (spatial activity, linguistic activity). D3: modalities (auditory, visual). D4: visual channels (focal, ambient). | Tactile input should have been added as a further modality level. Outside interruptions that can affect time-sharing between dual tasks are not identified. The model cannot gauge resource demand, and the factors that drive the allocation policy are unidentified. There is a difference between the laboratory and the real world.
Card et al. (1983) | The GOMS model stands for Goals, Operators, Methods and Selection rules. Goals represent what the user wants to accomplish. Operators are the actions that the software allows the user to take. Methods are well-learned sequences of subgoals and operators that can accomplish a goal. Selection rules are the personal rules users follow in deciding which method to use in a particular circumstance. | Pros: GOMS can predict the performance of a system before the design is developed, requires no special skills, and has proven to be efficient. Cons: it is time-consuming, assumes error-free expert behaviour and routine tasks, requires significant time investment, and ignores parallel processing and problem solving.
Table 3 summarises different psychological theories developed in the context of HCI. Cognitive theories (understanding the human mind) and design theories (applied psychology to improve design) are listed. This classification helps in understanding the research going on in the field of psychology that has a significant impact on systems designed in HCI.
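As an illustration of how a GOMS-style analysis can predict performance before a design is built, the sketch below applies keystroke-level operator times to two hypothetical methods for the same goal. The operator times are commonly quoted approximate keystroke-level-model values, and the method definitions are invented for the example.

```python
# Keystroke-level sketch of GOMS-style prediction: estimate execution time for
# two hypothetical methods of deleting a file by summing operator times.
# Operator times are commonly quoted approximate KLM values (seconds).
OPERATOR_TIME = {
    "K": 0.28,   # press a key
    "P": 1.10,   # point with the mouse
    "B": 0.10,   # press or release a mouse button
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def predict(method: str) -> float:
    """Sum the operator times in a space-separated operator sequence."""
    return sum(OPERATOR_TIME[op] for op in method.split())

# Hypothetical methods for the goal "delete the selected file".
menu_method = "H P B M P B"      # move to mouse, open menu, choose Delete
shortcut_method = "M K"          # recall and press the Delete key

print(f"menu method     : {predict(menu_method):.2f} s")
print(f"shortcut method : {predict(shortcut_method):.2f} s")
```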
IV. FUTURE WORK
Psychology is still distant from HCI; it remains an independent area of research. While working on this paper, it was realised that psychology is not widely applied in HCI, and there is not enough research on the psychology of designers in choosing a particular interface-modality pair. Due to time constraints, it was not possible to study the different aspects of applied psychology involved in multimodal interfaces, if any exist; rather, a general representation of the theoretical models was provided.
V. CONCLUSIONS
This paper summarises the different interfaces existing in HCI and categorises them on the basis of the modality implemented. It lists the work done so far in the design of different interfaces based on different modalities, as well as the work done to understand users' requirements and context-aware systems. Finally, it lists the psychological theories and models developed in the context of HCI.
REFERENCES
[1] Sebe, N. Multimodal interfaces: Challenges and perspectives. Journal of Ambient Intelligence and Smart Environments, 2009.
[2] Lavie, T., Meyer, J. Benefits and costs of adaptive user interfaces. International Journal of Human-Computer Interaction, 2010.
[3] Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.M., Kazemzadeh, A., Lee, S., Neumann, U., & Narayanan, S. Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information. ICMI, 2004.
[4] Giacoppo, A.S. The role of theory in HCI. Department of Psychology, Catholic University, 2001.
[5] Salamin, P., Thalmann, D., & Vexo, F. Context aware, multimodal, and semantic rendering engine. EPFL, 2010.
[6] Geurts, L., Abeele, V., Husson, J., Windey, F., Van Over, M., Annema, J.H., & Desmet, S. Digital Games for Physical Therapy: Fulfilling the Need for Calibration and Adaptation. TEI, 2011.
[7] Kaplan, J., Yankelovich, N. Open Wonderland: Extensible Virtual World Architecture. IEEE, 2011.
[8] Ji, Q., Yang, X. Real-Time Eye, Gaze, and Face Pose Tracking for Monitoring Driver Vigilance. Elsevier Science Ltd, 2002.
[9] Starner, T., Auxier, J., & Ashbrook, D. The Gesture Pendant: A Self-illuminating, Wearable, Infrared Computer Vision System for Home Automation Control and Medical Monitoring. IEEE, 2000.
[10] Landauer, T.K. Research Methods in Human-Computer Interaction. Handbook of Human-Computer Interaction, 1988.
[11] Oviatt, S. Mutual Disambiguation of Recognition Errors in a Multimodal Architecture. CHI, 1999.
[12] Jaimes, A., Sebe, N. Multimodal human-computer interaction: A survey. Computer Vision and Image Understanding, 2007.
[13] Harrison, C., Benko, H., & Wilson, A.D. OmniTouch: Wearable Multitouch Interaction Everywhere. UIST, 2011.
[14] Saponas, T.S., Harrison, C., & Benko, H. PocketTouch: Through-Fabric Capacitive Touch Input. UIST, 2011.
[15] Delamare, W., Coutrix, C., & Nigay, L. Pointing in the Physical World for Light Source Selection. EICS, 2013.
[16] Addlesee, M., Curwen, R., Hodges, S., Newman, J., Steggles, P., Ward, A. Implementing a sentient computing system. IEEE, 2001.
[17] Ishii, H. Tangible User Interfaces. CHI, 2006.
[18] Reilly, D.F., Rouzati, H., Wu, A., Hwang, J.Y., Brudvik, J., & Edwards, W.K. TwinSpace: an Infrastructure for Cross-Reality Team Spaces. UIST, 2010.
[19] Reilly, D., & Inkpen, K. White Rooms and Morphing don't mix: Setting and the Evaluation of Visualization Techniques. CHI, 2007.
[20] Leung, K., Reilly, D., Hartman, K., Stein, S., & Westecott, E. Limber: DIY Wearables For Reducing Risk Of Office Injury. TEI, 2012.
[21] Reilly, D., Rodgers, M., Argue, R., Nunes, M., & Inkpen, K. Marked-up Maps: Combining Paper Maps and Electronic Information Resources. Personal and Ubiquitous Computing, 2006.
[22] Kellar, M., Reilly, D., Hawkey, K., Rodgers, M., MacKay, B., Dearman, D., Ha, V., MacInnes, W.J., Nunes, M., Parker, K., Whalen, T., & Inkpen, K.M. It's a Jungle Out There: Practical Considerations for Evaluation in the City. CHI, 2005.
[23] Wu, A., Reilly, D., Tang, A., & Mazalek, A. Tangible Navigation and Object Manipulation in Virtual Environments. TEI, 2011.
[24] Tang, A., Massey, J., Nelson, W., Reilly, D., Edwards, W.K. Verbal Coordination in First Person Shooter Games. CSCW, 2012.
[25] Dumais, S., & Czerwinski, M. Building Bridges from Theory to Practice. One Microsoft Way, 2001.
[26] Wickens, C.D. Multiple Resources and Mental Workload. Golden Anniversary Special Issue, June 2008.
[27] Olson, G.M., Olson, J.S. Human-Computer Interaction: Psychological Aspects of the Human Use of Computing, 2003.
[28] Wickens, C.D. Multiple Resources and Performance Prediction. Theoretical Issues in Ergonomics Science, 3(2), 159-177, 2002.
[29] Iqbal, S.T., Ju, Y.C., & Horvitz, E. Cars, Calls, and Cognition: Investigating Driving and Divided Attention. CHI, 2010.
[30] Iqbal, S.T., & Horvitz, E. Disruption and Recovery of Computing Tasks: Field Study, Analysis, and Directions. CHI, 2007.
[31] Iqbal, S.T., & Bailey, B.P. Investigating the Effectiveness of Mental Workload as a Predictor of Opportune Moments for Interruption. CHI, 2005, 1489-1492.
[32] Mark, G., Gonzalez, V.M., & Harris, J. No task left behind?: examining the nature of fragmented work. CHI, 2005, 321-330.
[33] Czerwinski, M., & Horvitz, E. An Investigation of Memory for Daily Computing Events. Microsoft Research, One Microsoft Way, 2002.
[34] Card, S., Moran, T.P., & Newell, A. The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates, 1983.