Deepak Pathak

Biography

Deepak Pathak is a faculty member in the School of Computer Science at Carnegie Mellon University. He received his Ph.D. from UC Berkeley, and his research spans computer vision, machine learning, and robotics. He is a recipient of faculty awards from Google, Sony, GoodAI, and Samsung, and of graduate fellowship awards from Facebook, NVIDIA, and Snapchat. His research has been featured in popular press outlets including The Economist, The Wall Street Journal, Quanta Magazine, The Washington Post, CNET, Wired, and MIT Technology Review, among others. Deepak received his Bachelor’s from IIT Kanpur with a Gold Medal in Computer Science. He co-founded VisageMap Inc., later acquired by FaceFirst Inc. For details: https://www.cs.cmu.edu/~dpathak/

Talk

Continually Improving Robots: Unsupervised Exploration and Rapid Adaptation

How can we train a robot that can generalize to perform thousands of tasks in thousands of environments? This question underscores the holy grail of robot learning research, which is currently dominated by learning from demonstrations or reward-based learning. However, both of these paradigms fall short because it is difficult to supervise an agent for all possible situations it can encounter in the future. We posit that such an ability is only possible if the robot can learn continually and adapt rapidly to new situations. Unsupervised exploration provides a means to autonomously and continually discover new tasks and acquire intelligent behavior without relying on any experts. However, just discovering new skills is not enough; the agent needs to adapt them to each new environment in an online manner. In this talk, I will first describe our early efforts in this direction, decoupling this general goal into two sub-problems: 1) continually discovering new tasks in the same environment, and 2) generalizing to new environments for the same task. I will discuss how these sub-problems can be combined to build a framework for general-purpose embodied intelligence. Throughout the talk, I will present several results from case studies of real-world robot learning, including legged robots walking on diverse unseen terrains, a robotic arm performing a range of diverse, unseen manipulation tasks in a zero-shot manner, and robots that are able to write on a whiteboard from visual input.
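To make the idea of unsupervised exploration concrete, below is a minimal sketch of one common form of it: using the prediction error of a learned forward-dynamics model as an intrinsic reward, so the agent is drawn toward situations it cannot yet predict. The environment interface, network sizes, and reward scaling here are illustrative assumptions, not the speaker's implementation.

```python
# Minimal sketch: prediction-error intrinsic reward for unsupervised exploration.
# All dimensions and the placeholder batch below are illustrative assumptions.
import torch
import torch.nn as nn


class ForwardDynamics(nn.Module):
    """Predicts the next state from the current state and action."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def intrinsic_reward(model, state, action, next_state):
    """Reward equals the model's prediction error: it is high where the agent
    cannot yet predict the consequences of its actions, which pushes it to
    explore novel states without any external supervision."""
    with torch.no_grad():
        pred = model(state, action)
        return 0.5 * (pred - next_state).pow(2).mean(dim=-1)


if __name__ == "__main__":
    state_dim, action_dim = 8, 2
    model = ForwardDynamics(state_dim, action_dim)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Placeholder batch standing in for real robot transitions (s, a, s').
    s = torch.randn(32, state_dim)
    a = torch.randn(32, action_dim)
    s_next = torch.randn(32, state_dim)

    r_int = intrinsic_reward(model, s, a, s_next)  # used as the RL reward signal
    loss = (model(s, a) - s_next).pow(2).mean()    # fit dynamics on own experience
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"mean intrinsic reward: {r_int.mean():.4f}")
```

Because the dynamics model is continually trained on the agent's own experience, transitions it has mastered stop being rewarding, and exploration naturally moves on to new behaviors.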
