I am an assistant professor in the Department of Information and Communication Engineering at DGIST. My research focuses on combining artificial intelligence techniques and HCI methodologies to enable efficient and effective human-computer collaboration. I lead DIAG (DGIST Intelligence Augmentation Group), where we study and build hybrid intelligence systems that combine human and machine intelligence to understand and solve challenging computational problems.

Keywords: #Human Computation; #Crowdsourcing; #Human-Computer Interaction; #Artificial Intelligence


Conference and Journal Papers

Posters, Demos, and Workshop Papers



Robust 4D Simulation of Rare Events enabled by Human-Augmented Computer Vision

Research in robotics and autonomous vehicles suffers from a lack of realistic training data and environments in which to test new approaches. Rare and unusual events such as traffic accidents occur several orders of magnitude less frequently than is needed to collect large enough training and testing sets, presenting a fundamental bottleneck in the research and deployment of such systems. Thus, we propose to use a crowdsourced human-in-the-loop approach to guide computer vision algorithms to extract measurement information from large video corpora, allowing us to create simulations of scene dynamics for training and testing.

Crowdsourcing Emotion, Intention, and Context Annotations from Dialog Videos

Dialog videos contain rich contextual, emotional, and intentional cues about the characters and their surroundings. In this project, we aim to build a crowdsourcing platform that collects this information from a large dialog video dataset. The collection and aggregation process is challenging because the temporal dimension of the dataset has to be considered, and the labels can be highly subjective. We combat these challenges by exploring crowdsourcing techniques to design workflows and answer aggregation methods that efficiently collect multi-dimensional labels and overcome the subjective nature of the collected annotations.
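As an illustration of the aggregation problem (not the project's actual method), the sketch below aggregates per-segment emotion labels from several annotators. Because labels are subjective, it keeps the full label distribution per video segment rather than forcing a single majority winner; all names and the segment/label scheme are hypothetical.

```python
from collections import Counter


def aggregate_labels(annotations, num_segments):
    """Aggregate per-segment labels from multiple annotators.

    Instead of a hard majority vote, return the label distribution for
    each segment, preserving disagreement on subjective labels.
    """
    votes = [Counter() for _ in range(num_segments)]
    for annotator_labels in annotations:
        for segment, label in annotator_labels:
            votes[segment][label] += 1

    aggregated = []
    for counter in votes:
        total = sum(counter.values())
        aggregated.append({label: count / total
                           for label, count in counter.items()})
    return aggregated


# Three hypothetical annotators label two segments of a dialog video.
annotations = [
    [(0, "joy"), (1, "anger")],
    [(0, "joy"), (1, "sadness")],
    [(0, "surprise"), (1, "anger")],
]
result = aggregate_labels(annotations, num_segments=2)
# result[0] keeps both "joy" and "surprise" with their vote shares.
```

A downstream model can then consume these soft distributions directly, rather than discarding minority opinions that may be legitimate readings of an ambiguous scene.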

Improving Aggregate Crowd Performance on Crowd-Assisted Image Segmentation

In designing crowdsourcing tasks, we want to achieve the highest accuracy possible from the given resources. In this work, we introduce an approach that leverages tool diversity as a means of improving aggregate crowd performance. We define tool diversity as a property of a system (or a task) that enables the use of different tools for the same task. On semantic image segmentation tasks, we show that our approach improves aggregate accuracy significantly compared to using the single best tool alone.
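To give a rough intuition for why tool diversity can help (this is a toy sketch, not the paper's actual aggregation method), the example below combines binary segmentation masks from three hypothetical tools by pixel-wise majority vote. Because each tool makes errors in a different region, the combined mask can be more accurate than any single tool's output.

```python
def majority_vote(masks):
    """Pixel-wise majority vote over binary masks from different tools."""
    n = len(masks)
    return [1 if sum(pixels) * 2 > n else 0 for pixels in zip(*masks)]


def accuracy(mask, truth):
    """Fraction of pixels that match the ground-truth mask."""
    return sum(m == t for m, t in zip(mask, truth)) / len(truth)


# Toy 1-D "masks" from three hypothetical tools with different error patterns.
truth  = [1, 1, 1, 0, 0]
tool_a = [1, 1, 0, 0, 0]   # misses one part of the object
tool_b = [1, 0, 1, 0, 0]   # misses a different part
tool_c = [0, 1, 1, 0, 1]   # noisier overall

combined = majority_vote([tool_a, tool_b, tool_c])
# The combined mask recovers the full object even though no single tool did.
```

The key condition is that the tools' errors are not strongly correlated; when every tool fails on the same pixels, aggregation cannot recover them.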