Model Zoo
This section provides a curated collection of pre-trained models, reference implementations, and algorithm showcases developed within or integrated into the RoboOrchardLab framework. Our aim is to equip researchers and developers with ready-to-use tools to accelerate their work in embodied AI, facilitate benchmarking, and demonstrate the capabilities of our framework.
Here you will find models covering a range of tasks, including (but not limited to):
3D Object Detection
Vision-Language-Action (VLA) models
(Coming soon) Robotic Grasping
Other perception and control tasks relevant to embodied agents.
We particularly highlight algorithms and models published by our team, providing a direct path to reproducing and extending our research contributions.
Each model entry typically includes:
A brief description of the model architecture, its target task, and key features.
Performance metrics achieved on standard benchmarks.
Links to the original paper and official project page (if applicable).
Citation information for proper attribution.
We are continuously expanding this collection. Please browse the available models below; contributions from the community are highly encouraged!