I am a PhD student in the Department of Electrical Engineering and Computer Science (EECS) at the University of Michigan, Ann Arbor.
I am advised by Prof. Walter S. Lasecki and am a member of CRO+MA Lab.
I am currently at KAIST as a visiting student in KIXLAB.
My research interest spans human-computer interaction (HCI), crowdsourcing, machine learning, and digital image processing.
- J.Y. Song, R. Fok, F. Yang, K. Wang, A.R. Lundgard, W.S. Lasecki. Tool Diversity
as a Means of Improving Aggregate Crowd Performance on Image Segmentation Tasks.
In HCOMP Workshop on Human Computation for Image and Video Analysis (GroupSight 2017). Quebec City, Canada.
- S. Gouravajhala, J.Y. Song, J. Yim, R. Fok, Y. Huang, F. Yang, K. Wang, Y. An, W.S. Lasecki.
Intelligence for Robotics. In Collective Intelligence Conference (CI 2017). New York, NY.
Crowd-based Image Segmentation
In-home robots interact with a wide range of objects on a daily basis, but automatic object segmentation and
recognition in visual scenes remain open problems, particularly in scenarios where robots may encounter
never-before-seen objects (e.g., when someone purchases a new item for their home or office). We introduce a hybrid
intelligence system for real-time 2D image segmentation of never-before-seen objects, built around a novel
crowdsourcing workflow that leverages multiple tools for the same task to increase accuracy by balancing the systematic
error biases introduced by the tools themselves. We show that the expectation maximization (EM) method we use to combine
answers from multiple tools produces results at least as good as the best-performing individual tool, even when gold standard answers are unavailable for identifying which tool performs best.
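The EM-style aggregation described above can be sketched as follows. This is a minimal illustration, not the actual system: it fuses binary segmentation masks from several tools by jointly estimating each tool's sensitivity and specificity along with a consensus probability map (in the spirit of STAPLE-like label fusion). The function name `em_combine` and all parameters are illustrative.

```python
import numpy as np

def em_combine(masks, n_iters=20):
    """Fuse binary segmentation masks from several tools via EM:
    alternately estimate the consensus foreground probability per pixel
    (E-step) and each tool's sensitivity p / specificity q (M-step)."""
    masks = np.asarray(masks, dtype=float)   # shape: (n_tools, n_pixels)
    n_tools, n_pix = masks.shape
    # Initialize the consensus with the pixel-wise mean (soft majority vote).
    w = masks.mean(axis=0)
    p = np.full(n_tools, 0.9)                # per-tool sensitivity
    q = np.full(n_tools, 0.9)                # per-tool specificity
    prior = w.mean()                         # foreground prior
    for _ in range(n_iters):
        # E-step: posterior probability that each pixel is foreground,
        # given every tool's vote and the current reliability estimates.
        fg = prior * np.prod(
            np.where(masks == 1, p[:, None], 1 - p[:, None]), axis=0)
        bg = (1 - prior) * np.prod(
            np.where(masks == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = fg / np.maximum(fg + bg, 1e-12)
        # M-step: re-estimate each tool's reliability from the soft labels.
        p = (masks * w).sum(axis=1) / np.maximum(w.sum(), 1e-12)
        q = ((1 - masks) * (1 - w)).sum(axis=1) / np.maximum((1 - w).sum(), 1e-12)
        prior = w.mean()
    return w >= 0.5, p, q
```

Because the E-step down-weights tools whose estimated error rates are high, the fused mask tracks the more reliable tools even though no ground truth is ever supplied.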
We are building crowdsourcing tools to help autonomous robots recognize new contexts or problems in real-time.
Our system uses a hybrid intelligence workflow that combines human intelligence from the crowd with automated
support, in the form of focused tasks (ones that the system is not able to complete on its own) and smart tools
for aiding object segmentation. This pairs the machine's ability to precisely select content with people's semantic
understanding of the scene. It also allows us to benefit from as much automated labeling as can be done reliably,
while using human intelligence both to fill in the gaps and to ensure that new objects in a scene do not cause
failures to complete an assigned task.
Intermodal Non-rigid Image Registration
I've also collaborated with Prof. (Emeritus) Charles R. Meyer in the Department of Biomedical Engineering and Prof. Jeffrey A. Fessler in EECS.
I worked on image registration problems such as mutual-information-based intermodal non-rigid image registration and 2D-3D projection image registration.
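The similarity measure at the heart of the intermodal registration work above is mutual information, which can be estimated from a joint intensity histogram. The sketch below is illustrative only (the function name `mutual_information` and the bin count are assumptions, not the published method); a registration loop would maximize this score over transform parameters.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate the mutual information between two intensity images
    from their joint histogram. MI is high when intensities in `a`
    predict intensities in `b`, regardless of modality, which is why
    it is a common similarity measure for intermodal registration."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    # MI = KL divergence between joint and product of marginals.
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Misaligning one image scrambles the joint histogram and lowers the score, so gradient-free or gradient-based optimizers can use it directly as a registration objective.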
- Jean Y. Song and Charles R. Meyer, 2D-3D Image Registration using Thin-Plate Spline and Volume Rendering,
SPIE Medical Imaging 2015, Orlando, Feb. 2015.
- Jean Y. Song, J. A. Fessler, and C. R. Meyer, Adaptive Filtering on Conditional Mutual Information for
Intermodal Non-Rigid Image Registration, IEEE NSS/MIC 2014, Seattle, Nov. 2014.