Welcome to HorizonRobotics RobotLab#

About#

Welcome to the Robot Lab of Horizon Robotics. Our mission is to build cutting-edge, fully open-sourced Embodied AI algorithms and infrastructure. Together with Horizon’s computing platforms, we aim to empower the future of robots across all types of embodiment.

News#

  • [2025.09] 2 papers accepted to NeurIPS 2025

  • [2025.06] 1 paper accepted to IROS 2025

  • [2025.06] In the RoboTwin Dual-Arm Collaboration Challenge at the 2nd MEIS Workshop @ CVPR 2025, we won 🥇 first place in the Real-World Track and 🥈 second place in the Simulation Track

  • [2025.05] In the International Humanoid Robot Skills Competition held in Zhangjiang, Shanghai, we won 🥇 first place in the RoboTwin Bimanual Collaboration Simulation Challenge

  • [2025.05] We are hosting the “Edge-Deployable Manipulation Algorithms” track in the 19th “Challenge Cup” (挑战杯) National College Extracurricular Academic Science and Technology Competition

  • [2025.05] The first version of RoboOrchardLab was released

Projects#

RoboOrchard#

RoboOrchard Logo

RoboOrchard is a collection of tools and frameworks designed to streamline the entire process of robotics research and development, from simulation to real-world application.

  • RoboOrchardBuild: A set of tools designed to simplify the build process for the RoboOrchard project.

    Code | Pypi

  • RoboOrchardCore: The core package of the RoboOrchard project, providing the basic infrastructure for the framework, such as configuration management, data structures, and environment abstractions.

    Code | Docs | Pypi

  • RoboOrchardSchemas: A library of data models for the RoboOrchard project.

    Code | Pypi

  • RoboOrchardLab: A framework for training and evaluating embodied AI algorithms.

    Code | Docs | Pypi | Model Zoo

  • RoboOrchardGraspLabel: A set of tools to generate grasp labels for objects.

    Code

  • RoboOrchardDataRecorder: A comprehensive and robust suite of tools designed for high-fidelity data acquisition in robotics, with a primary focus on the ROS 2 ecosystem.

    Code

  • RoboOrchardDeploy: A set of tools for deployment, including calibration and model inference.

    Code

Publications#

HRDT Logo

H-RDT: Human Manipulation Enhanced Bimanual Robotic Manipulation

Hongzhe Bi, Lingxuan Wu, Tianwei Lin, Hengkai Tan, Zhizhong Su, Hang Su, Jun Zhu

arXiv 2025

Webpage | Paper | Code

MonoDream Logo

MonoDream: Monocular Vision-Language Navigation with Panoramic Dreaming

Shuo Wang, Yongcai Wang, Wanting Li, Yucheng Wang, Maiyue Chen, Kaihui Wang, Zhizhong Su, Xudong Cai, Yeying Jin, Deying Li, Zhaoxin Fan

arXiv 2025

Webpage | Paper

Uni3R Logo

Uni3R: Unified 3D Reconstruction and Semantic Understanding via Generalizable Gaussian Splatting from Unposed Multi-View Images

Xiangyu Sun, Haoyi Jiang, Liu Liu, Seungtae Nam, Gyeongjin Kang, Xinjie Wang, Wei Sui, Zhizhong Su, Wenyu Liu, Xinggang Wang, Eunbyung Park

arXiv 2025

Webpage | Paper | Code

IGFuse Logo

IGFuse: Interactive 3D Gaussian Scene Reconstruction via Multi-Scans Fusion

Wenhao Hu, Zesheng Li, Haonan Zhou, Liu Liu, Xuexiang Wen, Zhizhong Su, Xi Li, Gaoang Wang

arXiv 2025

Webpage | Paper | Code

DreamLifting Logo

DreamLifting: A Plug-in Module Lifting MV Diffusion Models for 3D Asset Generation

Ze-Xin Yin, Jiaxiong Qiu, Liu Liu, Xinjie Wang, Wei Sui, Zhizhong Su, Jian Yang, Jin Xie

arXiv 2025

Webpage

FineGrasp Logo

FineGrasp: Towards Robust Grasping for Delicate Objects

Yun Du, Mengao Zhao, Tianwei Lin, Yiwei Jin, Chaodong Huang, Zhizhong Su

arXiv 2025

Webpage | Paper | Code

EmbodiedGen Logo

EmbodiedGen: Towards a Generative 3D World Engine for Embodied Intelligence

Xinjie Wang, Liu Liu, Yu Cao, Ruiqi Wu, Wenkang Qin, Dehui Wang, Wei Sui, Zhizhong Su

arXiv 2025

Webpage | Paper | Code | HF Demo

RoboTransfer Logo

RoboTransfer: Geometry-Consistent Video Diffusion for Robotic Visual Policy Transfer

Liu Liu, Xiaofeng Wang, Guosheng Zhao, Keyu Li, Wenkang Qin, Jiaxiong Qiu, Zheng Zhu, Guan Huang, Zhizhong Su

arXiv 2025

Webpage | Paper | Code

DIPO Logo

DIPO: Dual-State Images Controlled Articulated Object Generation Powered by Diverse Data

Ruqi Wu, Xinjie Wang, Liu Liu, Chunle Guo, Jiaxiong Qiu, Chongyi Li, Lichao Huang, Zhizhong Su, Ming-Ming Cheng

NeurIPS 2025

Webpage | Paper | Code

SEM Logo

SEM: Enhancing Spatial Understanding for Robust Robot Manipulation

Xuewu Lin, Tianwei Lin, Lichao Huang, Hongyu Xie, Yiwei Jin, Keyu Li, Zhizhong Su

arXiv 2025

Paper | Code

AuxThink Logo

Aux-Think: Exploring Reasoning Strategies for Data-Efficient Vision-Language Navigation

Shuo Wang, Yongcai Wang, Wanting Li, Xudong Cai, Yucheng Wang, Maiyue Chen, Kaihui Wang, Zhizhong Su, Deying Li, Zhaoxin Fan

NeurIPS 2025

Webpage | Paper | Code

GeoFlow-SLAM Logo

GeoFlow-SLAM: A Robust Tightly-Coupled RGBD-Inertial Fusion SLAM for Dynamic Legged Robotics

Tingyang Xiao, Xiaolin Zhou, Liu Liu, Wei Sui, Wei Feng, Jiaxiong Qiu, Xinjie Wang, Zhizhong Su

IROS 2025

Paper | Code

GaussTR Logo

GaussTR: Foundation Model-Aligned Gaussian Transformer for Self-Supervised 3D Spatial Understanding

Haoyi Jiang, Liu Liu, Tianheng Cheng, Xinjie Wang, Tianwei Lin, Zhizhong Su, Wenyu Liu, Xinggang Wang

CVPR 2025

Webpage | Paper | Code

BIP3D Logo

BIP3D: Bridging 2D Images and 3D Perception for Embodied Intelligence

Xuewu Lin, Tianwei Lin, Lichao Huang, Hongyu Xie, Zhizhong Su

CVPR 2025

Webpage | Paper | Code

GOC Logo

Gaussian Object Carver: Object-Compositional Gaussian Splatting with Surface Completion

Liu Liu, Xinjie Wang, Jiaxiong Qiu, Tianwei Lin, Xiaolin Zhou, Zhizhong Su

arXiv 2024

Paper

GLS Logo

GLS: Geometry-aware 3D Language Gaussian Splatting

Jiaxiong Qiu, Liu Liu, Xinjie Wang, Tianwei Lin, Wei Sui, Zhizhong Su

arXiv 2024

Webpage | Paper | Code

Join Us#

Scan the QR code to view our open positions.

JoinUs