This model was produced by fine-tuning the SpatialVLA model via LoRA (r=32) on the Fractal and Bridge datasets. We made a few modifications to the training data to improve final performance (see the SpatialVLA paper for details).
See the SpatialVLA GitHub README for instructions on how to run and evaluate this model on WidowX Robot tasks.
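For a quick sanity check outside the full evaluation pipeline, the sketch below shows how a SpatialVLA checkpoint is typically loaded through transformers with trust_remote_code, mirroring the usage pattern published on the SpatialVLA model cards. It assumes that interface (AutoProcessor/AutoModel with predict_action and decode_actions helpers); the repository ID, image path, and unnorm_key are placeholders, so defer to the GitHub README for the authoritative steps.

```python
# Minimal inference sketch. Assumptions: this checkpoint exposes the standard
# SpatialVLA remote-code interface; "IPEC-COMMUNITY/spatialvla-4b-224-sft-bridge",
# "observation.png", and the unnorm_key are placeholders.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "IPEC-COMMUNITY/spatialvla-4b-224-sft-bridge"  # placeholder repo ID

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()

# Single third-person RGB observation plus a language instruction.
image = Image.open("observation.png").convert("RGB")
prompt = "What action should the robot take to pick up the spoon?"

inputs = processor(images=[image], text=prompt, return_tensors="pt")
with torch.no_grad():
    generation = model.predict_action(inputs)

# Decode into a 7-DoF end-effector action; unnorm_key selects the
# dataset statistics used to un-normalize the predicted deltas.
actions = processor.decode_actions(generation, unnorm_key="bridge_orig/1.0.0")
print(actions)
```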
BibTeX:
@misc{qu2025spatialvlaexploringspatialrepresentations,
      title={SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model},
      author={Delin Qu and Haoming Song and Qizhi Chen and Yuanqi Yao and Xinyi Ye and Yan Ding and Zhigang Wang and JiaYuan Gu and Bin Zhao and Dong Wang and Xuelong Li},
      year={2025},
      eprint={2501.15830},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2501.15830},
}
Base model
google/paligemma2-3b-pt-224