Machine learning and control theory have made substantial advances in robotics over the past decade. However, many challenges remain when studying robots that interact with humans. These include autonomous vehicles that interact with people, service robots working with their users at home, assistive robots helping people with disabilities, and drones or other autonomous agents that humans encounter in their daily lives. These challenges present an opportunity to develop new learning and control algorithms that enable safe and efficient interactive autonomy.
In this talk, I will discuss a journey toward formalizing human-robot interaction. Specifically, I will first discuss developing data-efficient techniques for learning computational models of human behavior. I will continue with the challenges that arise when agents (including humans and robots) interact with each other. Further, I will argue that in many applications, a full computational model of the human is not necessary for seamless and efficient interaction. Instead, in many collaborative tasks, conventions (low-dimensional shared representations of tasks) are sufficient for capturing the interaction between agents. I will conclude the talk with challenges around adapting conventions in human-robot applications such as assistive teleoperation and collaborative transport.