I am interested in planning and perception for robotics, and in how we can enable robots to
reason about long-horizon tasks and generalize in the real world.
We distill features from 2D foundation models into 3D feature fields, enabling
few-shot language-guided manipulation that generalizes across object poses, shapes,
appearances, and categories.
We combine generalized neural network policies (Action Schema Networks) with search
algorithms (MCTS), exploiting the strengths of each to overcome the weaknesses of the
other when solving probabilistic planning problems.
We introduce STRIPS-HGN, a hypergraph neural network capable of
learning domain-independent planning heuristics by exploiting the structure induced by
the delete relaxation of a planning problem. Our learned heuristics generalize across
initial and goal states, problem sizes, and even domains.
We introduce novel techniques that leverage
Action Schema Networks (ASNets)
both to guide simulations in Monte-Carlo Tree Search (MCTS) and to bias the selection
phase of MCTS. We show that these synergies mitigate the effects of suboptimal learning
and improve robustness and planning performance.
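The idea of using a learned policy inside MCTS can be sketched in miniature. The toy below is an illustrative assumption, not the actual ASNets architecture or the paper's algorithm: `policy_prior` is a hand-coded stand-in for a learned policy on a one-dimensional goal-reaching problem. The policy biases both the selection phase (as a PUCT-style prior) and the simulation phase (rollout actions are sampled from it):

```python
import math
import random

random.seed(0)
GOAL, HORIZON = 5, 20       # toy problem: walk on the integer line to GOAL
ACTIONS = (-1, +1)

def policy_prior(s, a):
    # Hypothetical stand-in for a learned policy (playing the role of an ASNet):
    # assign higher probability to the action that moves toward the goal.
    return 0.8 if (s + a - GOAL) ** 2 < (s - GOAL) ** 2 else 0.2

def rollout(s):
    # Simulation phase: sample actions from the policy instead of uniformly.
    for t in range(HORIZON):
        if s == GOAL:
            return 1.0 - t / HORIZON  # reaching the goal sooner scores higher
        weights = [policy_prior(s, a) for a in ACTIONS]
        s += random.choices(ACTIONS, weights=weights)[0]
    return 0.0

class Node:
    def __init__(self, s):
        self.s = s
        self.n = 0
        # action -> [visit count, value sum, child node]
        self.stats = {a: [0, 0.0, None] for a in ACTIONS}

def select(node, c=1.4):
    # Selection phase: UCT biased by the policy prior (PUCT-style bonus).
    def score(a):
        n_a, w_a, _ = node.stats[a]
        q = w_a / n_a if n_a else 0.0
        u = c * policy_prior(node.s, a) * math.sqrt(node.n + 1) / (1 + n_a)
        return q + u
    return max(ACTIONS, key=score)

def simulate(node, depth=0):
    if node.s == GOAL or depth >= HORIZON:
        return 1.0 if node.s == GOAL else 0.0
    a = select(node)
    child = node.stats[a][2]
    if child is None:
        child = Node(node.s + a)          # expansion
        node.stats[a][2] = child
        ret = rollout(child.s)            # policy-guided simulation
    else:
        ret = simulate(child, depth + 1)
    node.n += 1                           # backpropagation
    node.stats[a][0] += 1
    node.stats[a][1] += ret
    return ret

def plan(s0, iters=500):
    root = Node(s0)
    for _ in range(iters):
        simulate(root)
    # recommend the most-visited action at the root
    return max(ACTIONS, key=lambda a: root.stats[a][0])

print(plan(0))  # should recommend +1, moving toward the goal
```

The same skeleton applies with a trained network in place of `policy_prior`: the policy concentrates both tree growth and rollouts on promising actions, while the search's value estimates can correct the policy where it is suboptimal.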