Yuchi (Allan) Zhao

Hi! My name is Yuchi (Allan) Zhao. I recently graduated from the University of Waterloo with a degree in Mechatronics Engineering. Currently, I'm working as a research associate at the University of Toronto, supervised by Prof. Alán Aspuru-Guzik, Prof. Florian Shkurti, and Prof. Animesh Garg.

I am interested in building robots that leverage multimodal inputs, including vision, language, and tactile sensing, enabling them to quickly adapt to unstructured environments, efficiently learn to solve complex tasks, and closely interact with humans.

Email  /  CV  /  Scholar  /  GitHub  /  LinkedIn

profile photo
Publications


MVTrans: Multi-view Perception to See Transparent Objects
Yuchi Zhao*, Yi Ru Wang*, Haoping Xu*, Sagi Eppel, Alan Aspuru-Guzik, Florian Shkurti, Animesh Garg
Accepted to ICRA 2023. < project > < paper >

Transparent object perception is a crucial skill for applications such as robot manipulation in household and laboratory settings. Our proposed method, MVTrans, is an end-to-end multi-view architecture with multiple perception capabilities, including depth estimation, segmentation, and pose estimation. Additionally, we establish a novel procedural photo-realistic dataset generation pipeline and create a large-scale transparent object detection dataset, Syn-TODD, which is suitable for training networks across all three modalities: RGB-D, stereo, and multi-view RGB.


Large Language Models for Chemistry Robotics
Naruki Yoshikawa*, Marta Skreta*, Kourosh Darvish, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Andrew Zou Li, Yuchi Zhao, Haoping Xu, Artur Kuramshin, Alan Aspuru-Guzik, Florian Shkurti, Animesh Garg
Accepted to Autonomous Robots (AURO). < paper >

Our approach, CLAIRIFY, combines automatic iterative prompting with program verification to ensure syntactically valid programs in a data-scarce domain-specific language that incorporates environmental constraints. The generated plan is executed through solving a constrained task and motion planning problem using PDDLStream solvers to prevent spillages of liquids as well as collisions in chemistry labs. We demonstrate the effectiveness of our approach in planning chemistry experiments, with plans successfully executed on a real robot using a repertoire of robot skills and lab tools. Specifically, we showcase the utility of our framework in pouring skills for various materials and two fundamental chemical experiments for materials synthesis: solubility and recrystallization.


An Adaptive Robotic Framework for Chemistry Lab Automation
Naruki Yoshikawa*, Andrew Zou Li*, Kourosh Darvish*, Yuchi Zhao*, Haoping Xu*, Alan Aspuru-Guzik, Animesh Garg, Florian Shkurti
< project > < paper >

In the process of materials discovery, chemists currently need to perform many laborious, time-consuming, and often dangerous lab experiments. To accelerate this process, we propose a framework for robots to assist chemists by performing lab experiments autonomously. The solution allows a general-purpose robot to perform diverse chemistry experiments and efficiently make use of available lab tools. Our architecture uses a modified PDDLStream solver for integrated task and constrained motion planning, which generates plans and motions that are guaranteed to be safe by preventing collisions and spillage. We present a modular framework that can scale to many different experiments, actions, and lab tools.


Fuse Local and Global Semantics in Representation Learning
Yuchi Zhao, Yuhao Zhou
< paper >

FLAGS aims to extract both global and local semantics from images to benefit various downstream tasks. It shows promising results under the common linear evaluation protocol. We also evaluate detection and segmentation on PASCAL VOC and COCO to show that the representations extracted by FLAGS are transferable.

Research Experience


University of Toronto - The Matter Lab
Research Associate
Duration: Sept. 2023 - Present

Researching general-purpose robotic frameworks for chemistry lab automation and self-driving labs under Prof. Alán Aspuru-Guzik, Prof. Florian Shkurti, and Prof. Animesh Garg. My research mainly focuses on transparent object pose estimation, task planning, and robot manipulation.


University of Toronto - RVL Lab
Research Associate
Duration: July 2021 - Sept. 2022

Researched robot manipulation systems for chemistry lab automation under Prof. Florian Shkurti and Prof. Animesh Garg. My research mainly focused on transparent object pose estimation, task and motion planning (TAMP), and simulation.


Huawei Technologies Co., Ltd.
Computer Vision Researcher
Duration: Sept. 2020 - Jan. 2021

Researched self-supervised representation learning and class-agnostic counting, and co-authored a paper on contrastive learning. Mentored by Yuhao Zhou and Dr. Hailin Hu.

Work Experience


Ford Motor Company
Firmware Developer
Duration: May 2021 - Aug. 2021

Developed bootloader and BSP code for Qualcomm 5G chips used in the telematics control unit (TCU) of vehicles.


Miovision Technologies Inc.
Computer Vision Engineer
Duration: Jan. 2020 - Apr. 2020

Built a fisheye image semantic segmentation system for traffic scene understanding.


Eagle Vision Systems (Acquired by McNeilus)
Deep Learning Engineer
Duration: July 2019 - Jan. 2020

Worked on garbage bin detection and classification to automate the operation of the truck's robotic lift arm.


HiRide Share Ltd. (Acquired by Facedrive)
Full Stack App Developer
Duration: May 2019 - Aug. 2019

Developed multiple features including navigation and real-time messaging for the HiRide App.

Interests
  • Positive psychology
  • Stamp carving
  • Photography
  • Clarinet
  • Badminton


Last updated: Nov 13th, 2023 by Yuchi (Allan) Zhao