Contact

uran.oh@ewha.ac.kr

URAN OH

Assistant Professor

Department of Computer Science and Engineering

Ewha Womans University 


오유란

Eyes-free text entry with wearable sensors

Adaptive app launcher for smartwatches

Supporting instance navigation of photos for people with visual impairments

Nonvisual guidance for assisting spatial tasks in 3D space with six degrees of freedom


The Motivation

Text entry is not always available or efficient, especially in mobile contexts where users must constantly monitor their surroundings for their safety (e.g., to avoid bumping into people or obstacles).

 

The Goal and Expected Contributions

Designing and implementing a wearable device with finger- or wrist-mounted sensors such as inertial measurement unit (IMU) or electromyography (EMG) sensors. The device should enable users to enter text without visual feedback.
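One plausible core of such a device is a per-user calibrated gesture classifier that maps a sensor feature vector to a character. The sketch below is purely illustrative, assuming preprocessed feature vectors and synthetic placeholder centroids; the names (GESTURE_CENTROIDS, classify_gesture, enter_text) are hypothetical, not from any existing system.

```python
import numpy as np

# Hypothetical sketch: map a wrist-sensor feature vector (e.g., averaged
# IMU/EMG channels) to a character via nearest-centroid classification.
# Centroids would come from a per-user calibration session; the values
# below are synthetic placeholders for illustration only.
GESTURE_CENTROIDS = {
    "a": np.array([0.9, 0.1, 0.0]),
    "b": np.array([0.1, 0.9, 0.0]),
    "c": np.array([0.0, 0.1, 0.9]),
}

def classify_gesture(features: np.ndarray) -> str:
    """Return the character whose calibrated centroid is closest."""
    return min(GESTURE_CENTROIDS,
               key=lambda ch: np.linalg.norm(features - GESTURE_CENTROIDS[ch]))

def enter_text(feature_stream) -> str:
    """Concatenate classified characters for a stream of gesture features."""
    return "".join(classify_gesture(f) for f in feature_stream)
```

A real system would replace the centroids with a model trained on calibration data and add feedback (e.g., speech or vibration) to confirm each entered character.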

 

 

The Motivation

Object classification/localization and scene summarization have been ongoing research topics in computer vision for many years. While these techniques can give people with visual impairments better access to the visual content of an image, it is still challenging for them to fully understand the scene, which may prevent many of them from being socially engaged with friends [ref].

 

The Goal and Expected Contributions

Developing a system that enables users with visual impairments to gain a better understanding of a complex scene with multiple instances by allowing them to spatially explore each instance in the scene by touch. The system can improve the accessibility of images by providing detailed visual information about them.
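The touch-exploration idea can be reduced to a simple lookup: a per-pixel instance label map (e.g., from an off-the-shelf instance segmentation model) paired with a description per instance. The sketch below is a minimal illustration under that assumption; the label regions and descriptions are synthetic placeholders, and announce is a hypothetical name.

```python
import numpy as np

# Hypothetical sketch: a per-pixel instance label map (as produced by an
# instance segmentation model) plus per-instance descriptions. Touching a
# pixel announces the description of the instance under the fingertip.
label_map = np.zeros((240, 320), dtype=int)   # 0 = background
label_map[40:120, 60:160] = 1                 # instance 1 occupies one region
label_map[130:200, 180:300] = 2               # instance 2 occupies another

descriptions = {0: "background", 1: "a person smiling", 2: "a brown dog"}

def announce(touch_x: int, touch_y: int) -> str:
    """Return the description to speak for a touch at (x, y) in image coords."""
    instance_id = int(label_map[touch_y, touch_x])  # NumPy indexes row, column
    return descriptions[instance_id]
```

In practice the returned string would be sent to a screen reader or text-to-speech engine as the finger moves across the image.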

 

The Motivation

With the advancement of augmented and virtual reality (AR/VR) technologies, interactions such as navigation or object manipulation are no longer limited to two dimensions. However, assistance for supporting interactions in 3D space for people with visual impairments has not been well explored, especially when the number of degrees of freedom is too high to be delivered to users at once.

 

The Goal and Expected Contributions

Designing and implementing a system that conveys spatial information about the surroundings in 3D space via nonvisual feedback with minimal cognitive load. The system may be used to assist object localization (i.e., helping a blind person reach a specific object) and photography for people with visual impairments.
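One way to keep cognitive load low is to collapse the high-dimensional error into a single cue at a time, guiding the largest-error axis first. The sketch below illustrates that idea for the three translational degrees of freedom only; the function name, axis labels, and threshold are hypothetical choices, not a specification of the actual system.

```python
# Hypothetical sketch: reduce a 3D hand-to-target offset to one verbal cue
# at a time (largest-error axis first) to limit cognitive load. Rotational
# degrees of freedom could be handled the same way with their own labels.
def next_cue(offset, threshold=0.05):
    """offset = (dx, dy, dz) in meters from hand to target; returns one cue."""
    labels = [("right", "left"), ("up", "down"), ("forward", "backward")]
    axis = max(range(3), key=lambda i: abs(offset[i]))
    if abs(offset[axis]) < threshold:
        return "stop"  # every axis is within the threshold of the target
    positive, negative = labels[axis]
    return positive if offset[axis] > 0 else negative
```

The same cue could equally be rendered as spatialized audio or as vibration on a wrist-worn device instead of speech.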

The Motivation

Selecting a target from a collection of items on a smartwatch is a frequent yet challenging task. The constrained screen real estate can accommodate only a small number of items large enough for fingertips. As a result, users often need to search through a long list of items or navigate UI hierarchies to find and select a target.
 

The Goal and Expected Contributions

Developing an app launcher for smartwatches that adaptively changes the layout of apps or the size of app icons based on their launch likelihoods. This would enable users to find and open a desired app more efficiently.
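A simple way to estimate launch likelihood is to combine frequency and recency, e.g., by summing exponentially decayed launch events, and then to scale icon sizes between a minimum and maximum touch-target radius. The sketch below is one such illustration; the half-life, radius bounds, and function names are all assumed, not taken from the project.

```python
# Hypothetical sketch: score each app by launch frequency decayed by recency,
# then map scores to icon radii so likelier apps get larger touch targets.
def launch_score(timestamps, now, half_life=3600.0):
    """Sum of exponentially decayed launch events (half-life in seconds)."""
    return sum(0.5 ** ((now - t) / half_life) for t in timestamps)

def icon_radii(app_launches, now, r_min=12, r_max=36):
    """Scale radii linearly between the lowest- and highest-scoring app."""
    scores = {app: launch_score(ts, now) for app, ts in app_launches.items()}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores tie
    return {app: r_min + (s - lo) / span * (r_max - r_min)
            for app, s in scores.items()}
```

A launcher could recompute these radii on each wake of the watch face, so recently and frequently used apps stay easiest to hit.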

A figure from <http://cs231n.stanford.edu/slides/2016/winter1516_lecture8.pdf>
An example figure taken from an Android app called Bubble Cloud Wear
An example figure from the TAP wearable keyboard