The Conference Assistant: Combining Context-Awareness with Wearable Computing

Anind K. Dey, Daniel Salber, Gregory D. Abowd
GVU Center, College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
+1 404 894 7512
{anind, salber, abowd}@cc.gatech.edu
http://www.cc.gatech.edu/fce/contexttoolkit

Masayasu Futakawa
Hitachi Research Laboratory
7-1-1 Omika-cho
Hitachi-shi, Ibaraki-ken, 319-1221, Japan
81-294-52-5111
[email protected]
Abstract

We describe the Conference Assistant, a prototype mobile, context-aware application that assists conference attendees. We discuss the strong relationship between context-awareness and wearable computing and apply this relationship in the Conference Assistant. The application uses a wide variety of context and enhances user interactions with both the environment and other users. We describe how the application is used and the context-aware architecture on which it is based.

1. Introduction

In human-human interaction, a great deal of information is conveyed without explicit communication, but rather by using cues. These shared cues, or context, help to facilitate grounding between participants in an interaction [3]. We define context to be any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or physical or computational object.

In human-computer interaction, there is very little shared context between the human and the computer. Context in human-computer interaction includes any relevant information about the entities in the interaction between the user and computer, including the user and computer themselves. By improving computers' access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services. We define applications that use context to provide task-relevant information and/or services to a user to be context-aware.

Context rapidly changes in situations where the user is mobile. The changing context can be used to adapt the user interface to an application, providing relevant services and information to the user. While context is important to mobile computing in general, it is of particular interest to wearable computing. This is evident from the number of papers dealing with context-awareness in the previous Symposiums on Wearable Computers.

Rhodes [13] presented a list of defining characteristics for wearable computers. In each of these features, context plays an important role.

Portable while operational: A wearable computer is capable of being used while the user is mobile. When a user is mobile, her context is much more dynamic. She is moving through new physical spaces, encountering new objects and people. The services and information she requires will change based on these new entities.

Hands-free use: A wearable computer is intended to be operated with the minimal use of hands, relying on speech input or one-handed chording keyboards and joysticks. Limiting the use of traditional input mechanisms (and somewhat limiting the use of explicit input) increases the need to obtain implicitly sensed contextual information.

Sensors: To enhance the explicit user input, a wearable computer should use sensors to collect information about the user's surrounding environment. Rhodes intended that the sensors be worn on the body, but the real goal is for the sensed information to be available to the wearable computer. This means that sensors can not only be on the body, but also be in the environment, as long as the wearable computer has a method for obtaining the sensed environmental information.

Proactive: A wearable computer should be acting on its user's behalf even when the user is not explicitly using it. This is the essence of context-aware computing: the computer analyzes the user's context and makes task-relevant information and services available to the user, interrupting the user when appropriate.
Always on: A wearable computer is always on. This is important for context-aware computing because the wearable computer should be continuously monitoring the user's situation or context so that it can adapt and respond appropriately. It is able to provide useful services to the user at any time.

From a survey on context-aware computing [6], we found that most context-aware applications use a minimal variety of context. In general, the use of context is limited to only identity and location, neglecting both time and activity. Complex context-aware applications are difficult to build. By complex, we mean applications that not only deal with a wide variety of context, but also that take into account the contexts of multiple people or entities, real-time context as well as historical context, that use a single piece of context for multiple purposes, and that support interactions between multiple users, mobile and wearable computers and the environment. This family of applications is hard to implement because there has been little support for thinking about and designing them.

This paper describes the design of a complex context-aware application that addresses these issues. We will present the Conference Assistant, a context-aware application for assisting conference attendees and presenters. We demonstrate how context is used to aid users and describe how the application was built. In particular, we will present some concepts that make it easier to design complex context-aware applications.

2. The Conference Assistant

In this section, we will present a complex prototype application, the Conference Assistant, which addresses the deficiencies we pointed out in previous context-aware applications. The Conference Assistant is a context-aware application intended for use by conference attendees. We will describe the conference domain, show why it is appropriate for context-awareness and wearable computing, and provide a scenario of use. We will then discuss the context used in this application and how it was used to provide the user benefit. We will end with a discussion of the types of context-aware features the Conference Assistant supports.

2.1. Conference Domain

The Conference Assistant was designed to assist people attending a conference. We chose the conference domain because conferences are very dynamic and involve an interesting variety of context. A conference attendee is likely to have similar interests as other attendees. There is a great deal of concurrent activity at large conferences, including paper presentations, demonstrations, special interest group meetings, etc., at which a large amount of information is presented. We built the Conference Assistant to help users decide which activities to attend, to provide awareness of the activities of colleagues, to enhance interactions between users and the environment, to assist users in taking notes on presentations and to aid in the retrieval of conference information after the conference concludes.

A wearable computer is very appropriate for this application. The Conference Assistant uses a wide variety of context: time, identity, location, and activity. It promotes interaction between simultaneous users of the application and has a large degree of interaction with the user's surrounding environment. Revisiting Rhodes' list of wearable computer characteristics, we can show how the domain is applicable for wearable computing.

Portable while operational: During a conference, a user is mobile, moving between presentation and demonstration spaces, with rapidly changing context.

Hands-free use: During a conference, hands should be free to take notes, rather than interacting with a computer to collect context information.

Sensors: Sensors in the environment can provide useful information about the conference to the user, including presentation information and activities of colleagues.

Proactive and Always on: In a conference, a user wants to pay attention to what is being presented while maintaining an awareness of other activities. A wearable computer can provide this awareness without explicit user requests.

2.2. User Scenario

Now that we have demonstrated the utility of context-awareness and wearable computing in the conference domain, we will present a user scenario for the Conference Assistant. A user is attending a conference. When she arrives at the conference, she registers, providing her contact information (mailing address, phone number, and email address), a list of research interests, and a list of colleagues who are also attending the conference. In return, she receives a copy of the conference proceedings and an application, the Conference Assistant, to run on her wearable computer. When she starts the application, it automatically displays a copy of the conference schedule, showing the multiple tracks of the conference, including both paper tracks and demonstration tracks. On the schedule (Figure 1), certain papers and demonstrations are highlighted (light gray) to indicate that they may be of particular interest to the user.

Figure 1. Screenshot of the augmented schedule, with suggested papers and demos highlighted (light-colored boxes) in the three (horizontal) tracks.

The user takes the advice of the application and walks towards the room of a suggested paper presentation. When she enters the room, the Conference Assistant automatically displays the name of the presenter and the title of the presentation. It also indicates whether audio and/or video of the presentation are being recorded. This
impacts the user's behavior, taking fewer or greater notes depending on the extent of the recording available. The presenter is using a combination of PowerPoint and Web pages for his presentation. A thumbnail of the current slide or Web page is displayed on the wearable computer display. The Conference Assistant allows the user to create notes of her own to "attach" to the current slide or Web page (Figure 2). As the presentation proceeds, the application displays updated slide or Web page information. The user takes notes on the presented information using the Conference Assistant. The presentation ends and the presenter opens the floor for questions. The user has a question about the presenter's tenth slide. She uses the application to control the presenter's display, bringing up the tenth slide, allowing everyone in the room to view the slide in question. She uses the displayed slide as a reference and asks her question. She adds her notes on the answer to her previous notes on this slide.

Figure 2. Screenshot of the Conference Assistant note-taking interface.

Figure 3. Screenshot of the partial schedule showing the location and interest level of colleagues. Symbols indicate interest level.

After the presentation, the user looks back at the conference schedule display and notices that the Conference Assistant has suggested a demonstration to see based on her interests. She walks to the room where the demonstrations are being held. As she walks past demonstrations in search of the one she is interested in, the application displays the name of each demonstrator and the corresponding demonstration. She arrives at the demonstration she is interested in. The application displays any PowerPoint slides or Web pages that the demonstrator uses during the demonstration. The demonstration turns out not to be relevant to the user and she indicates her level of interest to the application. She looks at the conference schedule and notices that her colleagues are in other presentations (Figure 3). A colleague has indicated a high level of interest in a particular presentation, so she decides to leave the current demonstration and to attend this presentation. The user continues to use the Conference Assistant throughout the conference for taking notes on both demonstrations and paper presentations.

Figure 4. Screenshots of the retrieval application: query interface and timeline annotated with events (4a) and captured slideshow and recorded audio/video (4b).

She returns home after the conference and wants to retrieve some information about a particular presentation. The user executes a retrieval application provided by the conference. The application shows her a timeline of the conference schedule with the presentation and demonstration tracks (Figure 4a). The application uses a feature known as context-based retrieval [9]. It provides a query interface that allows the user to populate the timeline with various events: her arrival and departure from different rooms, when she asked a question, when other people asked questions or were present, when a presentation used a particular keyword, or when audio or video were recorded. By selecting an event on the timeline (Figure 4a), the user can view (Figure 4b) the slide or Web page presented at the time of the event, audio and/or video recorded during the presentation of the slide, and any personal notes she may have taken on the presented information. She can then continue to view the current presentation, moving back and forth between the presented slides and Web pages.

In a similar fashion, a presenter can use a third application to retrieve information about his/her presentation. The application displays a timeline of the presentation, populated with events about when different slides were presented, when audience members arrived and left the presentation (and their identities), the identities of audience members who asked questions and the slides relevant to the questions. The interface is similar to that shown in Figure 4. The presenter can 'relive' the presentation, by playing back the audio and/or video, and moving between presentation slides and Web pages.
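The context-based retrieval just described can be sketched as a query over timestamped event records. The sketch below is illustrative only: the record layouts, field names, and functions are hypothetical, not the actual retrieval application's interfaces.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Event:
    kind: str    # e.g. "arrival", "departure", "question", "keyword"
    who: str     # identity of the person involved
    room: str    # location context
    time: float  # timestamp, e.g. seconds since the conference started

@dataclass
class SlideChange:
    room: str    # presentation space where the slide was shown
    time: float  # when this slide or Web page was put up
    slide: str   # captured slide title or content

def query_timeline(events: List[Event], kind: str,
                   who: Optional[str] = None) -> List[Event]:
    """Populate the timeline with events matching the user's query."""
    return [e for e in events
            if e.kind == kind and (who is None or e.who == who)]

def slide_at(slides: List[SlideChange], room: str,
             time: float) -> Optional[str]:
    """Return the slide on display in `room` at `time`: the latest
    slide change in that room at or before the selected event."""
    shown = [s for s in slides if s.room == room and s.time <= time]
    return max(shown, key=lambda s: s.time).slide if shown else None

# Example: find when Jane asked a question, then retrieve the slide
# that was on display at that moment.
events = [Event("arrival", "jane", "roomA", 0.0),
          Event("question", "jane", "roomA", 95.0)]
slides = [SlideChange("roomA", 10.0, "Slide 1"),
          SlideChange("roomA", 90.0, "Slide 2")]
q = query_timeline(events, "question", who="jane")[0]
print(slide_at(slides, q.room, q.time))  # -> Slide 2
```

Selecting an event on the timeline thus reduces to a time-indexed lookup: the event supplies the location and timestamp, and the most recent slide change at or before that timestamp identifies the slide, recorded audio/video segment, and notes to display.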
2.3. Use of Context

The Conference Assistant uses a wide variety of context to provide both services and information to users. We will first describe the context used in real time to assist a conference attendee during a conference and then will describe the historical context used after the conference by a conference attendee and a presenter.

When the user is attending the conference, the application first uses information about what is being presented at the conference and her personal interests to determine what presentations might be of particular interest to her. The application uses her location, the activity (presentation of a Web page or slide) in that location and the presentation details (presenter, presentation title, whether audio/video is being recorded) to determine what information to present to her. The text from the slides is saved for the user, allowing her to concentrate on what is being said rather than spending time copying down the slides. The context of the presentation (presentation activity has concluded, and the number and title of the slide in question) facilitates the user's asking of a question. The context is used to control the presenter's display, changing to a particular slide for which the user had a question.

The list of colleagues provided during registration allows the application to present other relevant information to the user. This includes both the locations of colleagues and their interest levels in the presentations they are currently viewing. This information is used for two purposes during a conference. First, knowing where other colleagues are helps an attendee decide which presentations to see herself. For example, if there are two interesting presentations occurring simultaneously, knowing that a colleague is attending one of the presentations and can provide information about it later, a user can choose to attend the other presentation. Second, as described in the user scenario, when a user is attending a presentation that is not relevant or interesting to her, she can use the context of her colleagues to decide which presentation to move to. This is a form of social or collaborative information filtering [15].

After the conference, the retrieval application uses the conference context to retrieve information about the conference. The context includes public context such as the times when presentations started and stopped, whether audio/video was captured at each presentation, the names of the presenters, the presentations and the rooms in which the presentations occurred, and any keywords the presentations mentioned. It also includes the user's personal context, such as the times at which she entered and exited a room, the rooms themselves, when she asked a question, and what presentation and slide or Web page the question was about. The application also uses the context of other people, including their presence at particular presentations and questions they asked, if any. The user can use any of this context information to retrieve the appropriate slide or Web page and any recorded audio/video associated with the context.

After the conference, a presenter can also use the conference context to obtain information relevant to his/her presentation. The presenter can obtain information about who was present for the presentation, the times at which each slide or Web page was visited, and who asked questions and about which slides. Using this information, along with the text captured from each slide and any audio/video recorded, the presenter can play back the entire presentation and question session.

2.4. Context-aware Features

Pascoe [11] introduced a set of four context-aware capabilities that applications can support. We will present each capability and show how the Conference Assistant supports each one.

Contextual sensing: A system detects context and simply presents it to the user, augmenting the user's sensory system. The Conference Assistant presents the user's current location, the name of the current presentation and presenter, the location of colleagues, and colleagues' level of interest in their presentations.

Contextual adaptation: A system uses context to adapt its behavior instead of providing a uniform interface in all situations. When a new presentation slide or Web page is presented, the Conference Assistant saves the user's notes from the previous slide and creates an empty textbox in which notes on the new current slide can be entered.

Contextual resource discovery: A system can locate and use resources that share part or all of its context. When a user enters a presentation/demonstration area, the Conference Assistant creates a temporary binding to the presentation server in the local environment. The shared context is location. This binding allows the application to obtain changes to the local presentation/demonstration.

Contextual augmentation: A system augments the environment with additional information, associating digital data with the current context. All the notes that a user makes on presented information are augmented with contextual information (location, presentation title and presenter, and time). This augmentation supports retrieval of the notes using context-based retrieval techniques.

The Conference Assistant exploits all four of the context-aware capabilities presented by Pascoe. These capabilities are used to provide substantial benefits to both conference attendees and presenters.

3. Application Design

In this section, we describe the design of the Conference Assistant. We illustrate the software architecture of the application, as well as the context-aware architecture it was built on top of. We discuss the concepts the
architecture supports that make it easier to build and evolve context-aware applications. Finally, we also describe the hardware used to deploy the application.

3.1. Software

The Conference Assistant is a complex context-aware application. It uses a wide variety of context, supporting both interaction between a single user and the environment and between multiple users. This application would have been difficult to build without a great deal of underlying support. It was built on top of an architecture designed to support context-aware applications [5]. We will first briefly describe this architecture and then will show how the architecture was used to build the Conference Assistant.

3.1.1. Context Architecture

In previous work [5,14], we presented an architecture and toolkit that we designed and implemented to support the building of context-aware applications. (For more information on the context architecture, see http://www.cc.gatech.edu/fce/contexttoolkit.) We will briefly discuss the components of the architecture and its merits. The architecture consists of three types of components: widgets, servers, and interpreters. They implement the concepts necessary for easing the development of context-aware applications. Figure 5 shows the relationship between the context components and applications.

Context widgets encapsulate information about a single piece of context, such as location or activity, for example. They provide a uniform interface to components or applications that use the context, hiding the details of the underlying context-sensing mechanism(s).

Figure 5. Relationship between applications and the context architecture. Arrows indicate data flow.

A context server is very similar to a widget, in that it supports the same set of features as a widget. The difference is that a server aggregates multiple pieces of context. In fact, it is responsible for all the context about a particular entity (person, place, or object). Aggregation facilitates the access of context by applications that are interested in multiple pieces of context about a single entity.

A context interpreter is used to abstract or interpret context. For example, a context widget may provide location context in the form of latitude and longitude, but an application may require the location in the form of a street name. A context interpreter may be used to provide this abstraction.

Context components are instantiated and executed independently of each other, in separate threads and on separate computing devices. The architecture makes the distribution of the context architecture transparent to context-aware applications, mediating all communications between applications and components. It supports communications using the HyperText Transfer Protocol (HTTP) for both the sending and receiving of messages. The language used for sending data is the eXtensible Markup Language (XML). XML and HTTP were chosen because they support lightweight integration of distributed components and facilitate access to the architecture on heterogeneous platforms with multiple programming languages. The only requirements on devices using the architecture are that they support ASCII parsing and TCP/IP. These minimal requirements are particularly important for mobile and wearable computers, for which communications support tends to be small.

The context architecture promotes three main concepts for building context-aware applications: separation of context sensing from context use, aggregation, and abstraction. It relieves application developers from having to deal with how to sense and access context information, letting them instead concentrate on how to use the context. It provides simplifying abstractions like aggregation and abstraction to make it easier for applications to obtain the context they require. Aggregation provides "one-stop shopping" for context about an entity, allowing application designers to think in terms of high-level information, rather than low-level details. The architecture makes it easy to add the use of context to existing applications that don't use context and to evolve applications that already use context. In addition, the architecture makes context-aware applications resistant to changes in the context-sensing layer. It encapsulates changes and the impact of changes, so applications do not need to be modified.

3.1.2. Software Design

The Conference Assistant was built using the context architecture just described. Table 1 lists all the context components used and Figure 6 presents a snapshot of the architecture when a user is attending a conference.

During registration, a User Server is created for the user. It is responsible for aggregating all the context information about the user and acts as the application's interface to the user's personal context information. It subscribes to information about the user from the public
Registration Widget, the user's Memo Widget, and the Location Widget in each presentation space. The Memo Widget captures the user's notes and also any relevant context (relevant slide, time, and presenter identity).

Table 1. Architecture components and responsibilities:
S = Servers, W = Widgets, I = Interpreters

Component         Responsibility
Registration (W)  Acquires contact info, interests, and colleagues
Memo (W)          Acquires user's notes and relevant presentation info
Recommender (I)   Locates interesting presentations
User (S)          Aggregates all information about user
Question (W)      Acquires audience questions and relevant presentation info
Location (W)      Acquires arrivals/departures of users
Content (W)       Monitors PowerPoint or Web page presentation, capturing content changes
Recording (W)     Detects whether audio/video is recorded
Presentation (S)  All information about a presentation

There is a Presentation Server for each physical location where presentations/demos are occurring. A Presentation Server is responsible for aggregating all the context information about the local presentation and acts as the application's interface to the public presentation information. It subscribes to the widgets in the local environment, including the Content Widget, Location Widget, Recording Widget and Question Widget.

Figure 6. Conference Assistant capture architecture.

When an audience member asks a question using the Conference Assistant, the Question Widget captures the context (relevant slide, location, time, and audience member identity) and notifies the local Presentation Server of the event. The server stores the information and also uses it to access a service provided by the Content Widget, displaying the slide or Web page relevant to the question.

The Conference Assistant does not communicate with any widget directly, but instead communicates only with the user's User Server, the User Servers belonging to each colleague, and the local Presentation Server. It subscribes to the user's User Server for changes in location and interests. It subscribes to the colleagues' User Servers for changes in location and interest level. It also subscribes to the local Presentation Server for changes in a presentation slide or Web page when the user enters a presentation space and unsubscribes when the user leaves.

In the conference attendee's retrieval application, all the necessary information has been stored in the user's User Server and the public Presentation Servers. The architecture for this application is much simpler, with the retrieval application only communicating with the user's User Server and each Presentation Server. As shown in Figure 4, the application allows the user to retrieve slides (and the entire presentation, including any audio/video) using context via a query interface. If personal context is used as the index into the conference information, the application polls the User Server for the times and location at which a particular event occurred (user entered or left a location, or asked a question). This information can then be used to poll the correct Presentation Server for the related presentation information. If public context is used as the index, the application polls all the Presentation Servers for the times at which a particular event occurred (use of a keyword, presence or question by a certain person). As in the previous case, this information is then used to poll the relevant Presentation Servers for the related presentation information.

In the presenter's retrieval application, all the necessary information has been stored in the public Presentation Server used during the relevant presentation. The architecture for this application is simple as well, with the retrieval application only communicating with the relevant Presentation Server. As shown in Figure 4, the application allows the user to replay the entire presentation and question session, or view particular points in the presentation using context-based retrieval. Context includes the arrival and departure of particular audience members, transitions between slides and/or Web pages, and when questions were asked and by whom.

3.2. Hardware

The Conference Assistant application is being executed on a variety of different platforms, including laptops running Windows 95/98 and Hewlett Packard 620LX WinCE devices. It was not actually run on a wearable computer, but there is no reason why it could not be. The only requirements are constant network access and a graphical display. For communications with the context architecture, we use Proxim's RangeLAN2 1.6 Mbps wireless LAN for WinCE devices and RadioLan's 10BaseRadio 10 Mbps wireless LAN.

The retrieval applications run on desktop machines, under both the Windows 95/98 and Solaris operating systems.

The context components were executed on a number of different computers running different operating systems. This includes Powermac G3's running MacOS, Intel
Pentiums running Windows 95 and Windows NT, and SPARC 10s running Solaris.

To sense the identity and location of conference attendees and presenters, we use PinPoint Corporation's 3D-iD™ Local Positioning System. This system uses radio frequency-based (RF) tags with unique identities and multiple antennas to locate users in a physical space. Although it can provide location information at a resolution of 6 feet, we used coarser-grained information to determine when users entered a room.

4. Related Work

In this section, we will discuss other work that is relevant to the Conference Assistant, in the areas of conference assistants, context-aware tour guides, note taking, and context-based retrieval.

There has been little work in the area of context-awareness in a conference setting [10]. In the Mobile Assistant Project, Nishibe et al. deployed 100 handheld computers with cell phones at the International Conference on Multiagent Systems (ICMAS '96). The system provided conference schedule and tourist information. Social filtering, using the queries of other conference attendees, was used to determine relevant tourist information. The system supported community activity by allowing attendees to search for others with similar interests. Context was limited to "virtual information" such as personal interests, not taking into account "real information" like location.

Somewhat similar to a conference assistant is a tour guide. Both applications provide relevant information about the user's current context. The context-aware tour guide application is, perhaps, the canonical context-aware application. It has been the focus of much effort by groups doing context-aware research. Feiner et al. developed a tour guide for the Columbia University campus that combined augmented reality with mobile computing [7]. Fels et al. and Long et al. built tour guide applications for visitors attending an open house at their respective laboratories [8,2]. These systems use static configurations and cannot deal with changes to tours at runtime. In contrast, the context-aware architecture used in the Conference Assistant is able to make runtime changes transparent to the application.

There have been a number of systems that support individual users in taking notes on presentations.

One of the most important projects in context-based retrieval was the Forget-me-not system from Lamming and Flynn [9]. It kept a record of a person's activity throughout the day in a diary format, allowing retrieval of the activity information based on context. Rhodes' wearable Remembrance Agent used context information about notes a user wrote, such as co-located people, location, and time, to allow automatic retrieval of those notes that most closely matched the user's current context [13]. Rekimoto et al. used an augmented reality system to attach notes to objects or locations [12]. When users approached those objects or locations, the note was retrieved. This is similar to the Locust Swarm project by Starner et al. [16], which allowed the attachment and retrieval of notes from infrared-based location tags.

5. Conclusions and Future Application Work

We have presented the Conference Assistant, a prototype mobile, context-aware application for assisting conference attendees in choosing presentations to attend, taking notes, and retrieving those notes. We discussed the important relationship between context-awareness and wearable computing. We demonstrated this relationship in the Conference Assistant. We showed how the Conference Assistant made use of a wide variety of context, both personal and environmental, and how it enhanced user interactions with both the environment and other users. We discussed the important concepts that our architecture supports, which make it easier to build and modify complex context-aware applications: separation of sensing and using context, aggregation, and abstraction.

The Conference Assistant is currently a prototype application running in our laboratory. We would like to deploy the application at an actual conference. This would require us to provide many handheld devices (in case the conference attendees do not have their own wearable computers), a wireless LAN, and an indoor positioning system. This would allow us to perform a realistic evaluation of the application.

There are also additional features that we would like to add to the Conference Assistant. The first is the addition of an improved assistant for demonstrations. Currently, the application doesn't treat paper presentations any differently from demonstrations. We would like to enhance the application when a demonstration is being given, by providing additional information about the
Classroom 2000 project used an augmented classroom demonstration. This includes relevant web pages, research
that captured audio, video, web slides visited and papers, and videos.
whiteboard activity to make the student notetaking Currently, the Conference Assistant only uses
activity easier [1]. The NotePals system aggregated the information about PowerPoint slides and Web pages
notes from several notetakers to provide a single group being presented. We would like to extend this to use
record of a presentation [4]. Stifelman built an augmented other presentation packages and mediums. This will
paper notebook that allowed access to the audio of a require no change to the application, but will require the
presentation during review [17]. The context most development of additional context widgets to capture the
extensively used in these applications is time. The presentations and relevant updates.
Conference Assistant expands the range of context used. Other features to add deal with access to information
about the user's colleagues. Presently, at registration, users indicate the colleagues about whom they would like to receive information. This is actually the opposite of how we should be approaching this problem from a privacy point of view. At registration, users should instead indicate who is allowed to access their information (location and level of interest). This allows users to manage their own information. A related feature is to allow users to access their colleagues' notes with the retrieval application. This would provide additional information to augment the user's own notes on a presentation and would be a source of notes for presentations that the user did not attend.

A final feature to add to the Conference Assistant is an interface that supports serendipitous information retrieval relevant to the current presentation, much like the Remembrance Agent [13]. Potential information to retrieve includes conference- and field-relevant information. We would also like to add a third retrieval application for conference organizers. This application would allow them to view anonymized information about the number of people at various presentations and demonstrations and the average amount of time attendees spent at each.

Acknowledgements

We would like to acknowledge the support of the Future Computing Environments research group for contributing to the ideas of the Conference Assistant. We would like to thank Thad Starner for his comments on this work. This work was supported in part by NSF CAREER Grant 9703384, NSF ESS Grant EIA-9806822, a Motorola UPR, and a Hitachi grant.

References

[1] G.D. Abowd et al., "Investigating the capture, integration and access problem of ubiquitous computing in an educational setting", in Proceedings of CHI '98, April 1998, pp. 440-447.

[2] G.D. Abowd, C.G. Atkeson, J. Hong, S. Long, R. Kooper and M. Pinkerton, "Cyberguide: A mobile context-aware tour guide", ACM Wireless Networks, 3(5), 1997, pp. 421-433.

[3] H.H. Clark and S.E. Brennan, "Grounding in communication", in L.B. Resnick, J. Levine and S.D. Teasley (Eds.), Perspectives on Socially Shared Cognition, Washington, DC, 1991.

[4] R. Davis et al., "NotePals: Lightweight note sharing by the group, for the group", in Proceedings of CHI '99, May 1999, pp. 338-345.

[5] A.K. Dey, D. Salber, M. Futakawa and G.D. Abowd, "An architecture to support context-aware applications", submitted to UIST '99.

[6] A.K. Dey and G.D. Abowd, "Towards an understanding of context and context-awareness", submitted to HUC '99.

[7] S. Feiner, B. MacIntyre, T. Hollerer and A. Webster, "A Touring Machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment", in Proceedings of the 1st International Symposium on Wearable Computers, October 1997, pp. 74-81.

[8] S. Fels et al., "Progress of C-MAP: A context-aware mobile assistant", in Proceedings of the AAAI 1998 Spring Symposium on Intelligent Environments, Technical Report SS-98-02, March 1998, pp. 60-67.

[9] M. Lamming and M. Flynn, "Forget-me-not: Intimate computing in support of human memory", in Proceedings of FRIEND21: International Symposium on Next Generation Human Interfaces, 1994, pp. 125-128.

[10] Y. Nishibe et al., "Mobile digital assistants for community support", AI Magazine, 19(2), Summer 1998, pp. 31-49.

[11] J. Pascoe, "Adding generic contextual capabilities to wearable computers", in Proceedings of the 2nd International Symposium on Wearable Computers, October 1998, pp. 92-99.

[12] J. Rekimoto, Y. Ayatsuka and K. Hayashi, "Augment-able reality: Situated communications through physical and digital spaces", in Proceedings of the 2nd International Symposium on Wearable Computers, October 1998, pp. 68-75.

[13] B. Rhodes, "The Wearable Remembrance Agent: A system for augmented memory", in Proceedings of the 1st International Symposium on Wearable Computers, October 1997, pp. 123-128.

[14] D. Salber, A.K. Dey and G.D. Abowd, "The Context Toolkit: Aiding the development of context-enabled applications", in Proceedings of CHI '99, May 1999, pp. 434-441.

[15] U. Shardanand and P. Maes, "Social information filtering: Algorithms for automating 'word of mouth'", in Proceedings of CHI '95, May 1995, pp. 210-217.

[16] T. Starner, D. Kirsch and S. Assefa, "The Locust Swarm: An environmentally-powered, networkless location and messaging system", in Proceedings of the 1st International Symposium on Wearable Computers, October 1997, pp. 169-170.

[17] L.J. Stifelman, "Augmenting real-world objects: A paper-based audio notebook", in Proceedings of CHI '96, April 1996, pp. 199-200.