Learning Better Ways to Measure and Move: Joint Optimization of an Agent’s Physical Design and Computational Reasoning


May 20, 2022, 1:00 PM – 2:00 PM

CDM 222 and online via Zoom


Matthew R. Walter, Toyota Technological Institute at Chicago, USA


The recent surge of progress in machine learning foreshadows the advent of sophisticated intelligent devices and agents capable of rich interactions with the physical world. Many of these advances focus on building better computational methods for inference and control—computational reasoning methods trained to discover and exploit the statistical structure and relationships in their problem domain. However, the design of physical interfaces through which a machine senses and acts in its environment is as critical to its success as the efficacy of its computational reasoning. Perception problems become easier when sensors provide measurements that are more informative towards the quantities to be inferred. Control policies become more effective when an agent’s physical design permits greater robustness and dexterity in its actions. Thus, the problems of physical design and computational reasoning are coupled, and the answer to what combination is optimal naturally depends on the environment the machine operates in and the task before it.

I will present learning-based methods that perform automated, data-driven optimization over sensor measurement strategies and physical configurations jointly with computational inference and control. I will first describe a framework that reasons over the configuration of sensor networks in conjunction with the corresponding algorithm that infers spatial phenomena from noisy sensor readings. Key to the framework is encoding sensor network design as a differentiable neural layer that interfaces with a neural network for inference, allowing for joint optimization using standard techniques for training neural networks. Next, I will present a method that draws on the success of data-driven approaches to continuous control to jointly optimize the physical structure of legged robots and the control policy that enables them to locomote. The method maintains a distribution over designs and uses reinforcement learning to optimize a shared control policy that maximizes the expected reward over the design distribution. I will then describe recent work that extends this approach to the coupled design and control of physically realizable soft robots. If time permits, I will conclude with a discussion of ongoing work that seeks to improve test-time generalization of the learned policies.
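The first idea above — treating sensor placement as just another set of parameters optimized jointly with the inference model — can be caricatured in a few lines. The sketch below is a toy illustration under assumptions of my own (a 1-D sinusoidal field, a per-query linear readout, finite-difference gradients in place of backpropagation through a differentiable layer), not the talk's actual framework:

```python
import math

def field(x):
    """Ground-truth spatial phenomenon the sensor network must infer."""
    return math.sin(2 * math.pi * x)

QUERY_POINTS = [0.1, 0.3, 0.5, 0.7, 0.9]   # locations to reconstruct
N_SENSORS = 3

def predict(params, q_idx):
    """Linear readout of the sensor measurements for one query point."""
    positions = params[:N_SENSORS]
    weights = params[N_SENSORS:]
    readings = [field(p) for p in positions]
    w = weights[q_idx * N_SENSORS:(q_idx + 1) * N_SENSORS]
    return sum(wi * r for wi, r in zip(w, readings))

def loss(params):
    """Mean squared reconstruction error over all query points."""
    return sum((predict(params, i) - field(q)) ** 2
               for i, q in enumerate(QUERY_POINTS)) / len(QUERY_POINTS)

def grad(params, eps=1e-5):
    """Forward finite-difference gradient (stand-in for autodiff)."""
    base = loss(params)
    g = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        g.append((loss(bumped) - base) / eps)
    return g

# Sensor positions and readout weights are optimized *jointly*:
params = [0.2, 0.4, 0.6] + [0.0] * (N_SENSORS * len(QUERY_POINTS))
initial = loss(params)
for _ in range(2000):
    g = grad(params)
    params = [p - 0.05 * gi for p, gi in zip(params, g)]
final = loss(params)
```

Because the sensor positions sit in the same parameter vector as the inference weights, a single gradient step simultaneously moves the sensors to more informative locations and refits the readout — the essence of encoding the design as a layer in the network.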
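The legged-robot method — a distribution over designs plus one shared policy trained to maximize expected reward across that distribution — can likewise be sketched in a scalar toy. Everything below is an illustrative assumption of mine (a one-parameter "leg length" design, a one-parameter control gain, a quadratic stand-in for episode return, a REINFORCE-style score-function update for the design distribution), not the talk's actual algorithm:

```python
import random

random.seed(0)

def episode_return(design, gain):
    """Toy stand-in for an RL rollout: how well a robot of this design
    (e.g. leg length) locomotes under the shared control gain."""
    return -(gain - design) ** 2 - (design - 1.5) ** 2

mu, sigma = 0.0, 0.2      # design distribution: N(mu, sigma^2)
gain = 0.0                # shared policy parameter
lr_mu, lr_gain, n = 0.02, 0.05, 64

for _ in range(500):
    # Sample a batch of designs and evaluate the shared policy on each.
    designs = [random.gauss(mu, sigma) for _ in range(n)]
    rewards = [episode_return(d, gain) for d in designs]
    baseline = sum(rewards) / n   # variance-reduction baseline

    # Score-function (REINFORCE) gradient for the design-distribution mean.
    g_mu = sum((r - baseline) * (d - mu) / sigma ** 2
               for r, d in zip(rewards, designs)) / n
    # Gradient of expected reward w.r.t. the shared gain (analytic here).
    g_gain = sum(2 * (d - gain) for d in designs) / n

    mu += lr_mu * g_mu
    gain += lr_gain * g_gain
```

The key structural point survives even in this caricature: the policy is updated against reward averaged over sampled designs, while the design distribution shifts toward designs on which that shared policy earns high reward, so the two co-adapt.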


Matthew R. Walter is an assistant professor at the Toyota Technological Institute at Chicago. His interests revolve around the realization of intelligent, perceptually aware robots that are able to act robustly and effectively in unstructured environments, particularly with and alongside people. His research focuses on machine learning-based solutions that allow robots to learn to understand and interact with the people, places, and objects in their surroundings. Matthew has investigated these areas in the context of various robotic platforms, including autonomous underwater vehicles, self-driving cars, voice-commandable wheelchairs, mobile manipulators, and autonomous cars for (rubber) ducks. Matthew obtained his Ph.D. from the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, where his thesis focused on improving the efficiency of inference for simultaneous localization and mapping.
