Contact

uoh@andrew.cmu.edu

URAN OH

Postdoctoral Research Fellow

Robotics Institute, School of Computer Science

Carnegie Mellon University 

The Motivation

The vast majority of work on understanding and supporting the gesture creation process has focused on professional designers. In contrast, gesture customization by end users—which may offer better memorability, efficiency, and accessibility than pre-defined gestures—has received little attention.

 

Related Publication(s) 

  • The Challenges and Potential of End-User Gesture Customization [PDF]

end-user touchscreen gesture customization


Touchscreen gesture sonification


The Motivation

While sighted users may learn to perform touchscreen gestures through observation (e.g., of other users or video tutorials), such mechanisms are inaccessible for users with visual impairments. As a result, learning to perform gestures without visual feedback can be challenging. 

 

Related Publication(s) 

  • Follow That Sound: Using Sonification and Corrective Verbal Feedback to Teach Touchscreen Gestures [PDF]

  • Audio-Based Feedback Techniques for Teaching Touchscreen Gestures [PDF]

mobile and wearable device use for people with visual impairments


The Motivation

With the increasing popularity of mainstream wearable devices, it is critical to assess the accessibility implications of such technologies. For people with visual impairments, who do not always need the visual display of a mobile phone, alternative means of eyes-free wearable interaction are particularly appealing. 

 

Related Publication(s) 

  • Current and Future Mobile and Wearable Device Use by People With Visual Impairments [PDF]

reading assistance via a finger-mounted device 

nonvisual on-body interaction

NavCog: indoor navigation assistance for people with visual impairments

personal object recognizer for people with visual impairments


The Motivation

The recent miniaturization of cameras has enabled finger-based reading approaches that provide blind and visually impaired readers with access to printed materials. Compared to handheld text scanners such as mobile phone applications, mounting a tiny camera on the user’s own finger has the potential to mitigate camera framing issues, enable a blind reader to better understand the spatial layout of a document, and provide better control over reading pace. 

 

Related Publication(s) 

  • The Design and Preliminary Evaluation of a Finger-Mounted Camera and Feedback System to Enable Reading of Printed Text for the Blind [PDF]

  • Evaluating Haptic and Auditory Directional Guidance to Assist Blind People in Reading Printed Text Using Finger-Mounted Cameras [PDF]

The Motivation

When in an unfamiliar place, people tend to use a walking navigation system on their device, comparing the map location with their surroundings. However, visually impaired people cannot check the map or the surrounding scenery to bridge the gap between their actual position and a rough GPS estimate. NavCog aims to provide a high-accuracy walking navigation system that combines BLE beacons and various other sensors with a new localization algorithm, for use both indoors and outdoors. [A link to the project]

 

Related Publication(s)

  • Ahmetovic, D., Gleason, C., Ruan, C., Kitani, K., Takagi, H., & Asakawa, C. (2016, September). NavCog: a navigational cognitive assistant for the blind. MobileHCI'16 (pp. 90-99). ACM. [PDF]

 

The Motivation

Blind people often need to identify objects around them, from packages of food to items of clothing. Automatic object recognition continues to provide limited assistance in such tasks because models tend to be trained on images taken by sighted people, whose background clutter, scale, viewpoints, occlusion, and image quality differ from those of photos taken by blind users.

 

Related Publication(s)

  • Kacorri, H., Kitani, K. M., Bigham, J. P., & Asakawa, C. (2017, May). People with Visual Impairment Training Personal Object Recognizers: Feasibility and Challenges. CHI'17 (pp. 5839-5849). ACM. [PDF]. 

The Motivation

For users with visual impairments, who do not necessarily need the visual display of a mobile device, non-visual on-body interaction (e.g., Imaginary Interfaces) could provide accessible input in a mobile context. Such interaction provides the potential advantages of an always-available input surface, and increased tactile and proprioceptive feedback compared to a smooth touchscreen.

 

Related Publication(s) 

  • Design of and Subjective Response to On-Body Input for People With Visual Impairments [PDF]

  • A Performance Comparison of On-Hand Versus On-Phone Nonvisual Input by Blind and Sighted Users [PDF]

  • Localization of Skin Features on the Hand and Wrist from Small Image Patches [PDF]