Intelligent agents acting in the real world need to perceive, learn from, reason about, and interact with their environment. In this talk, I will explore the role that humans play in the design and deployment of computer vision systems. First, large-scale manually labeled datasets have proven instrumental for scaling up visual recognition, but they come at a substantial human cost. I will discuss strategies for making optimal use of human annotation effort to advance computer vision. However, no dataset can foresee all the visual scenarios that a real-world system might encounter. I will argue that seamlessly integrating human expertise at runtime will become increasingly important for open-world computer vision. I will present both mathematical frameworks for human-machine collaboration and deep reinforcement learning models that open up new avenues for human-in-the-loop exploration.
Olga Russakovsky is a postdoctoral fellow at Carnegie Mellon University. She recently completed her PhD in computer science at Stanford, advised by Prof. Fei-Fei Li. Her research is in computer vision, closely integrated with machine learning and human-computer interaction. She led the ImageNet Large Scale Visual Recognition Challenge effort for two years, served as a Senior Program Committee member for WACV’16, and organized multiple workshops and tutorials at premier computer vision conferences. She founded and directs SAILORS, the Stanford AI Laboratory’s outreach camp designed to expose high school students from underrepresented populations to the field of AI, and helped pioneer the first “Women in Computer Vision” workshop at CVPR’15.