I am a first-year Ph.D. student in Computer Science at Stanford University.
I recently received my B.A. in Computer Science from UC Berkeley, where I was fortunate to be advised by Deepak Pathak, Igor Mordatch, and Pieter Abbeel. After graduation, I interned on the Robotics team at Google Brain.
Before my time at Berkeley, I was also fortunate to work with Zhuowen Tu at UC San Diego.
[Feb 2022] Invited Talks @ Google, FAIR, Sea AI Lab, ByteDance AI Lab on our language planner project.
[Dec 2021] Invited Talk @ Intel AI Lab on generalization across objects and morphologies in robot learning.
I'm broadly interested in robot learning.
The goal of my research is to build agents that can make intelligent decisions in embodied environments and exhibit generalizable motor skills in challenging scenarios. Recently, I have been interested in leveraging large pre-trained models to improve the generalization of robot capabilities.
PaLM-E: An Embodied Multimodal Language Model
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence
Google AI Blog
Language models can digest real-world sensor modalities (e.g., images), grounding them in the physical world. The largest model, with 562B parameters, is a generalist agent across language, vision, and robot planning.
We formulate a token decoding procedure that applies large language models to robotics settings. Tokens are selected based on their likelihood under both the language model and a set of grounded models, such as affordance, safety, and preference functions.
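The selection rule above can be sketched as greedy decoding over combined log-scores. This is a minimal illustration, not the paper's implementation: `lm_logprobs` and the grounded scoring functions are hypothetical stand-ins that each map a prefix to per-token log-scores.

```python
import math

def grounded_decode(lm_logprobs, grounded_fns, steps, start=()):
    """Greedy grounded decoding sketch: at each step, pick the token
    maximizing the LM log-likelihood plus the summed log-scores of the
    grounded models (e.g., affordance, safety, preference functions).
    Both `lm_logprobs` and each entry of `grounded_fns` take a token
    prefix and return a dict {token: log-score}."""
    prefix = list(start)
    for _ in range(steps):
        cand = lm_logprobs(tuple(prefix))
        # A token ruled out by any grounded model (score -inf) is never chosen.
        best = max(cand, key=lambda t: cand[t] + sum(
            g(tuple(prefix)).get(t, -math.inf) for g in grounded_fns))
        prefix.append(best)
    return prefix
```

For instance, if the language model prefers "pick" but an affordance function marks it infeasible (log-score of negative infinity), the decoder falls back to "place".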
Using hierarchical code generation, large language models can write robot policy code that exhibits spatial-geometric reasoning when given abstract natural language instructions, without any additional training.
Using various sources of textual embodied feedback, frozen large language models can articulate a grounded "thought process" for robots, solving many challenging long-horizon robotics tasks, even under adversarial perturbations.