Robots in the wild operate in unseen, complex, and challenging environments under new conditions. Most of the time, robot policies, controllers, and even self-sensing (i.e., estimating the robot's own current status) fail when taken from a controlled environment into the real world.
This research focuses on learning skills either on the fly (online reinforcement learning) or by leveraging past experience (imitation and transfer learning) to respond as quickly as possible to new, unseen environments. This spans robots that build models of their own motion as they move, to robots that solve for controllers or policies online as more data is collected in their new environments. This research also involves transferring knowledge between reality and simulation, where reinforcement learning can be used to guide exploration of the robot's capabilities while exploiting what it has already learned about itself and its environment.
This research topic is open-ended by nature, as robots that monitor their own behavior, explore new motions, and adapt to environmental variables have broad applications.