My research interests focus on developing crowd-powered systems for intelligent computer vision that allow machines to recognize objects, understand scenes, and interact with the world.
Keywords: #Human-Computer Interaction; #Crowdsourcing; #Computer Vision; #Artificial Intelligence
- J.Y. Song, S.J. Lemmer, X. Liu, S. Si, J. Kim, J.J. Corso, and W.S. Lasecki. Popup: Reconstructing 3D Video Using Particle Filtering to Aggregate Crowd Responses. In Proceedings of the ACM International Conference on Intelligent User Interfaces (IUI 2019). Los Angeles, CA. [25% Acceptance Rate]
- J.Y. Song, R. Fok, J. Kim, W.S. Lasecki. FourEyes: Leveraging Tool Diversity as a Means to Improve Aggregate Accuracy in Crowdsourcing. In ACM Transactions on Interactive Intelligent Systems (TiiS). 2018.
- J.Y. Song, R. Fok, A. Lundgard, F. Yang, J. Kim, W.S. Lasecki. Two Tools are Better Than One: Tool Diversity as a Means of Improving Aggregate Crowd Performance. In Proceedings of the ACM International Conference on Intelligent User Interfaces (IUI 2018). Tokyo, Japan. [23% Acceptance Rate] Best Student Paper Honorable Mention
- J.Y. Song, R. Fok, F. Yang, K. Wang, A.R. Lundgard, W.S. Lasecki. Tool Diversity as a Means of Improving Aggregate Crowd Performance on Image Segmentation Tasks. In HCOMP Workshop on Human Computation for Image and Video Analysis (GroupSight 2017). Quebec City, Quebec. 2017.
- S. Gouravajhala, J.Y. Song, J. Yim, R. Fok, Y. Huang, F. Yang, K. Wang, Y. An, W.S. Lasecki. Towards Hybrid Intelligence for Robotics. In Collective Intelligence Conference (CI 2017). New York, NY.
- J.Y. Song and C.R. Meyer. 2D-3D Image Registration Using Thin-Plate Spline and Volume Rendering. In SPIE Medical Imaging 2015. Orlando, FL. Feb. 2015.
- J.Y. Song, J.A. Fessler, and C.R. Meyer. Adaptive Filtering on Conditional Mutual Information for Intermodal Non-Rigid Image Registration. In IEEE NSS/MIC 2014. Seattle, WA. Nov. 2014.
Research in robotics and autonomous vehicles suffers from a lack of realistic training data and environments in which to test new approaches. Rare and unusual events such as traffic accidents occur several orders of magnitude less frequently than is needed to collect large enough training and testing sets, presenting a fundamental bottleneck in the research and deployment of such systems. Thus, we propose to use a crowdsourced human-in-the-loop approach to guide computer vision algorithms to extract measurement information from large video corpora, allowing us to create simulations of scene dynamics for training and testing.
Dialog videos contain rich contextual, emotional, and intentional cues about the characters and their surroundings. In this project, we aim to build a crowdsourcing platform that collects this information from a large dialog video dataset. Collection and aggregation are challenging because the temporal dimension of the dataset must be taken into account, and the labels can be highly subjective. We address these challenges by designing crowdsourcing workflows and answer aggregation methods that efficiently collect multi-dimensional labels and account for the subjective nature of the collected annotations.
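One way to handle subjective, time-varying labels, sketched below under assumed data structures (the project's actual schema and aggregation method may differ), is to aggregate per-frame annotations into per-window label *distributions* rather than forcing a single majority label, so that worker disagreement is preserved:

```python
from collections import Counter

def aggregate_subjective_labels(annotations, num_frames, window=30):
    """Aggregate per-frame subjective labels (e.g. emotions) into
    per-window probability distributions, preserving disagreement.

    `annotations` maps worker id -> list of (frame, label) pairs.
    This layout is hypothetical, for illustration only.
    """
    windows = [Counter() for _ in range(num_frames // window + 1)]
    for worker, pairs in annotations.items():
        for frame, label in pairs:
            windows[frame // window][label] += 1
    # Normalize vote counts to probability distributions per window.
    result = []
    for counts in windows:
        total = sum(counts.values())
        result.append({k: v / total for k, v in counts.items()} if total else {})
    return result

# Two workers annotate a 60-frame clip with 30-frame windows.
annotations = {"w1": [(0, "joy"), (35, "sad")], "w2": [(5, "joy")]}
print(aggregate_subjective_labels(annotations, num_frames=60))
# -> [{'joy': 1.0}, {'sad': 1.0}, {}]
```

Keeping full distributions lets downstream models treat low-agreement windows differently from high-agreement ones instead of discarding minority opinions.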
In designing crowdsourcing tasks, we want to achieve the highest possible accuracy from the available resources. In this work, we introduce an approach that leverages tool diversity as a means of improving aggregate crowd performance. We define tool diversity as a property of a system (or a task) that allows different tools to be used for the same task. On semantic image segmentation tasks, we show that our approach significantly improves aggregate accuracy compared to using the single best tool alone.
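The core idea can be illustrated with a minimal sketch: combine binary segmentation masks produced with different tools by pixel-wise majority vote. This is a simplified stand-in for illustration; the actual aggregation method in the work may differ (e.g. weighting tools by their error profiles).

```python
import numpy as np

def aggregate_masks(masks):
    """Combine equally-shaped binary (0/1) segmentation masks from
    different tools via pixel-wise majority vote: a pixel is kept in
    the output if more than half of the masks include it."""
    stack = np.stack(masks)                   # (n_masks, H, W)
    votes = stack.sum(axis=0)                 # per-pixel vote count
    return (votes > len(masks) / 2).astype(np.uint8)

# Three 2x2 masks, e.g. from tools with complementary error profiles.
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [1, 0]])
m3 = np.array([[1, 1], [1, 0]])
print(aggregate_masks([m1, m2, m3]))
# -> [[1 1]
#     [1 0]]
```

Because different tools tend to fail on different pixels (e.g. bounding-box tools over-include, scribble tools under-include), even this simple vote can cancel out tool-specific errors that no single tool avoids.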
We are building crowdsourcing tools to help autonomous robots recognize new contexts or problems in real time. Our system uses a hybrid intelligence workflow that combines human intelligence from the crowd with automated support, in the form of focused tasks (ones the system cannot complete on its own) and smart tools that aid object segmentation.