Captioning on Glass (CoG)

Captioning on Glass (CoG) provides real-time captioning, allowing deaf and hard-of-hearing users to converse with others. For more information, visit the project website at http://cog.gatech.edu.

Mobile Music Touch (MMT)

We present Mobile Music Touch, a wearable, wireless haptic piano instruction system composed of (1) five small vibration motors, one for each finger, fitted inside a glove, (2) a Bluetooth module mounted on the glove, and (3) piano music output from a laptop. Users hear the piano music and feel vibrations indicating which finger plays each note. We investigate the system’s potential for passive learning, i.e., learning to play piano passages automatically while engaged in everyday activities, as well as opportunities to use the system for rehabilitation.
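
The cueing logic can be sketched as a simple lookup from note to finger motor. The note-to-finger mapping, timing values, and command format below are illustrative assumptions, not the actual MMT firmware protocol:

```python
# Sketch of per-finger vibration cueing for a fixed five-finger hand
# position. The MIDI-note-to-motor mapping, lead time, and command
# format are assumptions for illustration only.

FINGER_MOTOR = {  # MIDI note -> glove motor index (thumb=0 ... pinky=4)
    60: 0,  # C4 -> thumb
    62: 1,  # D4 -> index
    64: 2,  # E4 -> middle
    65: 3,  # F4 -> ring
    67: 4,  # G4 -> pinky
}

def cue_for_note(midi_note, lead_ms=200, buzz_ms=120):
    """Build the vibration command to send (e.g., over Bluetooth)
    shortly before the note sounds; None if the note falls outside
    the five-finger position."""
    motor = FINGER_MOTOR.get(midi_note)
    if motor is None:
        return None
    return {"motor": motor, "lead_ms": lead_ms, "buzz_ms": buzz_ms}
```

A player for a song would iterate over upcoming notes and transmit each cue `lead_ms` milliseconds before the note's onset, so the wearer feels the finger buzz just before hearing the note.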

MAGIC

Gestures for interfaces should be short, pleasing, intuitive, and easily recognized by a computer. However, it is a challenge for interface designers to create gestures that are easily distinguishable from users’ normal movements. Our tool MAGIC Summoning addresses this problem. Given a specific platform and task, we gather a large database of unlabeled sensor data captured in the environments in which the system will be used (an “Everyday Gesture Library” or EGL). The EGL is quantized and indexed via multi-dimensional Symbolic Aggregate approXimation (SAX) to enable quick searching. MAGIC exploits the SAX representation of the EGL to suggest gestures with a low likelihood of false triggering. Suggested gestures are ordered according to brevity and simplicity, freeing the interface designer to focus on...
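
As a rough illustration of the SAX step, the sketch below z-normalizes a one-dimensional sensor stream, reduces it with piecewise aggregate approximation (PAA), and maps each segment mean to a letter using Gaussian breakpoints. The function name and parameter defaults are illustrative; MAGIC's actual multi-dimensional EGL indexing is more involved:

```python
import numpy as np
from statistics import NormalDist

def sax_word(series, n_segments=8, alphabet_size=4):
    """Convert a 1-D time series to a SAX word: z-normalize,
    apply piecewise aggregate approximation (PAA), then discretize
    each segment mean with breakpoints that cut N(0,1) into
    equiprobable regions."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)           # z-normalize
    paa = np.array([seg.mean()                        # PAA reduction
                    for seg in np.array_split(x, n_segments)])
    breakpoints = [NormalDist().inv_cdf(i / alphabet_size)
                   for i in range(1, alphabet_size)]
    symbols = np.searchsorted(breakpoints, paa)       # bin each segment mean
    return "".join(chr(ord("a") + int(s)) for s in symbols)
```

A rising ramp maps to a monotonically nondecreasing word, e.g. `sax_word(range(32))` yields `"aabbccdd"`. Indexing EGL subsequences by such words lets candidate gestures be checked for collisions with everyday movement via fast string lookups.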

CHAT

Research on dolphin cognition and communication in the wild remains a challenging task for marine biologists. Most problems arise from the uncontrolled nature of field studies and the difficulty of building suitable underwater research equipment. We develop a novel underwater wearable computer enabling researchers to engage in audio-based interaction between humans and dolphins. The design requirements are based on a research protocol developed by a team of marine biologists associated with the Wild Dolphin Project. Furthermore, we work on discovering and indexing dolphin whistles recorded in the wild.
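
One simple (and deliberately naive) way to flag candidate whistles in a recording is to mark spectrogram frames whose spectral peak falls inside a whistle frequency band and keep sufficiently long runs of such frames. The band limits, frame sizes, and thresholds below are illustrative assumptions, not the parameters used in the CHAT system:

```python
import numpy as np

def whistle_segments(signal, sr, band=(5000, 20000),
                     frame=1024, hop=512, min_frames=5):
    """Crude whistle detector (illustrative only): flag frames whose
    spectral peak lies in `band`, then keep runs of at least
    `min_frames` consecutive flagged frames as (start, end) pairs."""
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    window = np.hanning(frame)
    flags = []
    for start in range(0, len(signal) - frame + 1, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        peak = freqs[np.argmax(spec)]
        flags.append(band[0] <= peak <= band[1])
    # Collect contiguous flagged runs as (start_frame, end_frame).
    segments, run_start = [], None
    for i, flagged in enumerate(flags + [False]):
        if flagged and run_start is None:
            run_start = i
        elif not flagged and run_start is not None:
            if i - run_start >= min_frames:
                segments.append((run_start, i))
            run_start = None
    return segments
```

Detected segments could then be cut out and clustered or fingerprinted to build a searchable whistle index.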

T1DMITRI

As a member of the iDASH project (integrating Data for Analysis, Anonymization, and SHaring), Dr. Heintzman is the lead for DMITRI 1.0 (Diabetes Management Integrated Technology Research Initiative). The DMITRI project currently has a daily-life diabetes-management dataset from 16 subjects with diabetes, each recorded over 72–96 hours. It tracks data from wearable medical equipment, personal logs, nutritional logs, clinical history data, and questionnaires. The DMITRI datasets are shared through the iDASH portal at the National Center for Biomedical Computing at UCSD. This dataset is unique in that it combines an extensive amount of on-body monitoring (insulin pump dosage logs; Dexcom continuous glucose monitor; SenseWear activity monitor with accelerometer, GSR, and skin temperature sensing; Polar heart monitor; Philips Actiwatch; and Zeo...
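
Working with such a multi-device dataset typically involves aligning streams sampled at different rates. The sketch below, assuming hypothetical column names (the actual DMITRI schema may differ), pairs each continuous-glucose reading with the most recent activity sample using pandas:

```python
import pandas as pd

# Hypothetical toy streams; column names are assumptions, not the
# actual DMITRI schema.
cgm = pd.DataFrame({
    "time": pd.to_datetime(["2011-01-01 08:00", "2011-01-01 08:05",
                            "2011-01-01 08:10"]),
    "glucose_mgdl": [110, 118, 131],
})
activity = pd.DataFrame({
    "time": pd.to_datetime(["2011-01-01 08:01", "2011-01-01 08:06",
                            "2011-01-01 08:11"]),
    "steps_per_min": [12, 85, 80],
})

# Align each glucose reading with the most recent activity sample
# no more than 5 minutes old (both frames must be sorted by time).
merged = pd.merge_asof(cgm, activity, on="time",
                       direction="backward",
                       tolerance=pd.Timedelta("5min"))
```

Readings with no activity sample inside the tolerance window come out as NaN, which makes gaps in any one sensor stream explicit rather than silently interpolated.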

CopyCat

CopyCat is designed both as a platform to collect gesture data for our ASL recognition system and as a practical application that helps deaf children develop working memory and language skills while they play the game. The system uses a video camera and wrist-mounted accelerometers as the primary sensors. In CopyCat, the children use ASL to communicate with the heroine of the game, Iris the cat. For example, the child will sign to Iris, "ALLIGATOR ON CHAIR" (glossed from ASL). If the child signs poorly, Iris looks puzzled, and the child is encouraged to attempt the phrase again. If the child signs clearly, Iris "poofs" the villain and continues on her way. If the...

GART

The Gesture and Activity Recognition Toolkit (GART), formerly the Georgia Tech Gesture Toolkit, is a toolkit for rapid prototyping of gesture-based applications. There are two versions of the toolkit: a Linux shell-scripting-based version and the more refined Java version.

- About GART
- GART Releases
- A short video demo of Gesture Watch, which used GART
- Roadmap
- Team
- GART Support
- Installation instructions
- GART Manual - includes information on the comm and vision modules, as well as the WritingPad, PinkCup, and AccelGestures examples (note that the information involving Maven is outdated)
- Tutorials
- API
- Source code and bug tracking (trac)
- Additional resources: Hidden Markov Model Toolkit Homepage, Better UI for Weka, Internal wiki page for GART
- Other Information: Linux-Shell GT2K Home Page (older...

Telesign

Telesign is a system designed for Deaf adults attempting to carry out service transactions with a hearing person, such as visiting the veterinarian or getting their oil changed. American Sign Language (ASL) is the native language of the Deaf in the United States. ASL has a completely different grammar from English, which makes it difficult for a Deaf person to communicate in English with a hearing person. Traditional methods of communication between Deaf and hearing people are writing on paper and passing it back and forth, or typing on a Sidekick-like device and showing the screen. Both of these methods require the Deaf individual to create grammatically correct, or at least semantically understandable, English phrases, which may be difficult. Telesign is...

SMARTSign

The purpose of SMARTSign is to help hearing parents of deaf children learn American Sign Language (ASL). Ninety percent of deaf children are born to hearing parents, and these children are less likely to receive exposure to language before they reach school age. This lack of early language exposure can have severe consequences throughout their lives.

Mobiphos

Mobiphos is an application designed to run on digital cameras that supports automatic sharing of photographs among members of a collocated group engaged in a social activity. Mobiphos allows users to easily take pictures, browse thumbnails of those pictures, and share their photos with a collocated group of people in real time. When a person takes a photograph using Mobiphos, that picture is automatically shared with every member of the collocated group. At the same time, she is able to view a constantly updating stream of picture thumbnails scrolling across her screen as they are captured and shared by her fellow group members. From the user’s perspective, all of the photographs captured by the group...
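
The capture-and-broadcast model can be sketched in a few lines. Class and attribute names are hypothetical, and the real Mobiphos transfers thumbnails over a wireless network rather than through in-process lists:

```python
class Member:
    """One group member's local state (hypothetical sketch)."""
    def __init__(self, name):
        self.name = name
        self.photos = []   # photos this member took
        self.stream = []   # (photographer, thumbnail) pairs shared by others

class MobiphosGroup:
    """Collocated group: every capture is pushed to all other members."""
    def __init__(self):
        self.members = []

    def join(self, member):
        self.members.append(member)

    def capture(self, photographer, photo):
        photographer.photos.append(photo)      # keep the original locally
        for m in self.members:                 # broadcast to the group
            if m is not photographer:
                m.stream.append((photographer.name, photo))
```

The key design point the sketch captures is that sharing requires no explicit user action: taking a picture and publishing it to the group are the same operation.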
