Valerie Henderson Summet

[Photo: me in Cyprus. Sea Caves and Cape Greco, Ayia Napa, Cyprus. Dec. 2003]

Computer Science Education:

Mobile Computing in the Deaf Community:

During my work with the Deaf community, I discovered that mobile electronic communication (text messaging, instant messaging, SMS, mobile email, etc.) has had a large impact on that community. For the Deaf, these forms of communication can play much the same role that the voice capabilities of cell phones play for the hearing: an always-on, pervasive method of quick communication. Pagers have been marketed specifically to the Deaf community (e.g., Wyndtel), but no studies have actually examined the differences in communication. How do the Deaf use text messaging and instant messaging, and how is their usage similar to and different from hearing people's?

During the Spring of 2005, I conducted a study surveying Deaf teenagers at the Atlanta Area School for the Deaf (AASD) to learn about their current communication styles and practices. The study used qualitative techniques such as social network maps, diary logs, and semi-structured interviews to examine the space broadly. We began answering the basic questions of who, what, where, when, and how in order to understand Deaf teenagers' use of mobile, electronic communication technologies.

Publications:
Electronic Communication: Themes from a Case Study of the Deaf Community.
Valerie Henderson-Summet, Rebecca E. Grinter, Jennie Carroll, and Thad Starner.
Proceedings of Interact '07. Rio de Janeiro, Brazil. Sept. 2007.

Electronic Communication by Deaf Teenagers.
Valerie Henderson, Rebecca Grinter, and Thad Starner.
Technical Report GIT-GVU-05-34. Georgia Institute of Technology, College of Computing, GVU Center. October 2005.

CopyCat: Educational Game for Deaf Children

This project drew on the Contextual Computing Group's expertise in gesture and American Sign Language (ASL) recognition to build a compelling application for that technology while also furthering the recognition technology itself. Working with Harley Hamilton at the Atlanta Area School for the Deaf (AASD), we designed an interactive game geared toward language development for deaf children. The children direct the on-screen action through sign language, and the computer-based recognition system recognizes their signing and takes appropriate action based on whether the child signed correctly.

However, the recognition engine is a major undertaking and requires large amounts of data to improve its algorithms and models. Until the gesture recognition system was sufficiently robust, we used Wizard-of-Oz techniques to collect data from native signers and to involve the children in participatory design of the game. This approach helped us get the most out of our iterative design process.
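
The sketch below shows, in Python, what such a Wizard-of-Oz loop could look like. It is a hypothetical illustration, not the actual CopyCat code: the Game class, wizard_says_correct, and play_session are invented names, and a hidden human "wizard" stands in for the recognizer so the game can respond as if recognition were automatic.

    # Hypothetical Wizard-of-Oz game loop; all names are illustrative.
    class Game:
        def prompt(self, phrase):
            print(f"On-screen character asks the child to sign: '{phrase}'")

        def advance(self, phrase):
            print(f"Correct! The character acts out '{phrase}'.")

        def retry(self, phrase):
            print(f"Not quite; the character asks for '{phrase}' again.")

    def wizard_says_correct(phrase):
        """The hidden human wizard watches the child sign and enters a verdict."""
        answer = input(f"Wizard: was '{phrase}' signed correctly? [y/n] ")
        return answer.strip().lower().startswith("y")

    def play_session(phrases):
        game = Game()
        for phrase in phrases:
            game.prompt(phrase)
            # Each attempt could also be recorded as labeled training data
            # for the real recognizer.
            while not wizard_says_correct(phrase):
                game.retry(phrase)
            game.advance(phrase)

    if __name__ == "__main__":
        play_session(["cat under chair", "dog behind box"])

Because the wizard's verdicts reach the game through the same interface an automatic recognizer would use, the recognition engine could later be swapped in without changing the rest of the design.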

Publications:

American Sign Language Recognition in Game Development for Deaf Children.
Helene Brashear, Valerie Henderson, Kwang-Hyun Park, Harley Hamilton, Seungyon Lee, and Thad Starner.
Proceedings of ASSETS. Portland, OR. October 2006.

Development of an American Sign Language Game for Deaf Children.
Valerie Henderson, Seungyon Lee, Helene Brashear, Harley Hamilton, Thad Starner, and Steven Hamilton.
Proceedings of Interaction Design and Children. Boulder, CO. June 2005.

User-Centered Development of Gesture-Based American Sign Language Game.
Seungyon Lee, Thad Starner, Valerie Henderson, Helene Brashear, and Harley Hamilton.
Proceedings of the International Symposium on Instructional Technology and Education of the Deaf. Rochester, NY. June 2005.

CopyCat: An ASL Game for Deaf Children.
Seungyon Lee, Valerie Henderson, and Helene Brashear.
RESNA Student Design Competition. Atlanta, GA. June 2005.

A Gesture Based American Sign Language Game for Deaf Children.
Seungyon Lee, Valerie Henderson, Harley Hamilton, Helene Brashear, Thad Starner, and Steven Hamilton.
CHI '05 Extended Abstracts, pp 1589-1592. Portland, OR. April 2005.

Navigation and Wayfinding:

During the Summer of 2004, I worked in the Rehabilitation Research Department of the Veterans Hospital in Decatur, GA, where I helped conduct a user study evaluating "Cyber Crumbs." The goal was to develop a means of "salting" indoor structures with a network of inexpensive information "crumbs" to help people orient themselves easily within indoor structures and navigate through them efficiently to desired locations. Infrared (IR) communication between Crumbs and a User Badge provides a channel for obtaining information, and a low-profile bone-conduction headset provides voiced output of that information. Seventeen visually impaired participants tested this system and showed significant improvements in navigation efficiency compared with the performance of ten sighted participants.
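
The sketch below illustrates the basic Crumb-to-Badge interaction in Python. It is a hypothetical illustration only: the Crumb and UserBadge classes and the message format are assumptions made for exposition, not the deployed system's design.

    # Hypothetical sketch of a crumb broadcasting over IR and a badge
    # voicing the message; names and message format are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Crumb:
        location: str   # where this crumb is mounted
        guidance: str   # directions for a user passing this point

        def ir_broadcast(self):
            """Stand-in for the crumb's infrared transmission."""
            return f"{self.location}|{self.guidance}"

    class UserBadge:
        def receive(self, ir_message):
            location, guidance = ir_message.split("|", 1)
            self.speak(f"You are at {location}. {guidance}")

        def speak(self, text):
            # Stand-in for voiced output over the bone-conduction headset.
            print(f"[headset] {text}")

    # A user walking past a crumb outside room 204:
    crumb = Crumb("the east hallway, outside room 204",
                  "The elevator is thirty feet ahead on your left.")
    badge = UserBadge()
    badge.receive(crumb.ir_broadcast())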

People:
David Ross

Publications:
Cyber Crumbs: An Indoor Orientation and Wayfinding Infrastructure.
David Ross, Alexander Lightman, and Valerie Henderson.
Proceedings of RESNA. Atlanta, GA. June 2005.

Mobile Sign Language Recognition

The goal of this project is to offer a sign recognition system as another option for augmenting communication between the Deaf and hearing communities. We seek to implement a mobile, self-contained system that a Deaf user could use as a limited phrasebook: the wearable system would capture and recognize the Deaf user's signing, and the user could then cue the system to generate speech for the hearing listener.
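
The sketch below outlines that phrasebook pipeline in Python. It is a hypothetical illustration: the recognizer is stubbed out, and PHRASEBOOK, recognize, display, and speak are invented names, not the project's actual code.

    # Hypothetical phrasebook pipeline; the recognizer is a stub.
    PHRASEBOOK = {
        "phrase_01": "Where is the bus stop?",
        "phrase_02": "I am deaf. Please write your answer.",
    }

    def recognize(sign_features):
        """Stub: a real system would classify captured sign data
        into one of the phrasebook entries."""
        return "phrase_01"

    def display(text):
        print(f"[HUD 640x480] {text}")   # preview on the head-up display

    def speak(text):
        print(f"[speech] {text}")        # synthesized speech for the listener

    def phrasebook_loop(sign_features, user_confirms):
        phrase = PHRASEBOOK[recognize(sign_features)]
        display(f"Speak: '{phrase}'?")   # nothing is voiced until the user cues it
        if user_confirms():
            speak(phrase)

    phrasebook_loop(sign_features=None, user_confirms=lambda: True)

Keeping speech output behind an explicit user cue mirrors the interaction described above: the system recognizes the signing, but the user decides when the result is spoken aloud for the hearing listener.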

As this is designed to be a wearable system, the display is a small head-up display with a resolution of 640x480. This calls for a different class of interface, and I was responsible for prototyping it. The work posed two challenges:

  1. Designing an interface for an English-as-a-second-language population. Native signers "speak" ASL, not English. Thus, as a designer, I had to make choices about the output of the system. Would English sentences, icons, ASL-English, or some other output language be most appropriate? My current research interests actually grew out of this question.
  2. Designing an interface for a wearable system. The head-up display presents more constraints than just its size. It cannot assume the two-hands, two-eyes, full-attention paradigm of desktop or laptop computing; wearable displays must demand only minimal attention from the user, whose focus is constantly changing and may be fixed on many other things in the environment.

Publications:
Towards a One-Way American Sign Language Translator.
R.M. McGuire, J. Hernandez-Rebollar, T. Starner, V. Henderson, H. Brashear, and D.S. Ross.
Proceedings of the Sixth IEEE Conference on Automatic Face and Gesture Recognition, pp 620-625. May 2004.