---
license: apache-2.0
task_categories:
- robotics
library_name: robotics
---
|
|
|
# MemoryBench Dataset |
|
|
|
MemoryBench is a benchmark dataset for evaluating spatial memory and action recall in robotic manipulation. It accompanies the **SAM2Act+** framework, introduced in the paper *[SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation](https://huggingface.co/papers/2501.18564)*. For detailed task descriptions and further information, see the SAM2Act [website](https://sam2act.github.io); code is available at [https://github.com/sam2act/sam2act](https://github.com/sam2act/sam2act).
|
|
|
The dataset contains scripted demonstrations for three memory-dependent tasks built in RLBench (the same version used by [PerAct](https://peract.github.io/)):
|
|
|
- **Reopen Drawer**: Tests 3D spatial memory along the z-axis. |
|
- **Put Block Back**: Evaluates 2D spatial memory in the x-y plane.
|
- **Rearrange Block**: Requires backward reasoning based on prior actions. |
|
|
|
## Dataset Structure |
|
|
|
The dataset is organized as follows: |
|
```
data/
├── train/    # 100 episodes per task
├── test/     # 25 episodes per task
└── files/    # task files (.ttm & .py)
```
|
|
|
- **data/train/**: Contains three zip archives, one per task, each holding **100** scripted demonstrations for training (a sketch for unpacking them follows this list).

- **data/test/**: Contains the same three archives, each holding **25** held-out demonstrations for evaluation.

- **data/files/**: Includes the `.ttm` and `.py` task files needed to run evaluation (see the Usage section for where they go).
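
For reference, here is a minimal sketch of unpacking the per-task archives into the layout above. It assumes a local copy of the dataset under `data/` and uses only the Python standard library; check the repository's file listing for the exact archive names.

```python
import zipfile
from pathlib import Path

# Assumed location of a local copy of the dataset; adjust as needed.
DATA_ROOT = Path("data")

# Unpack each per-task archive into a folder of the same name.
for split in ("train", "test"):
    for archive in sorted((DATA_ROOT / split).glob("*.zip")):
        target = archive.with_suffix("")  # e.g. data/train/<task_name>/
        target.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
        print(f"extracted {archive.name} -> {target.name}/")
```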
|
|
|
## Usage |
|
|
|
This dataset is designed to be used in the same manner as the RLBench 18 Tasks introduced by [PerAct](https://peract.github.io/). You can follow the same usage guidelines, or watch SAM2Act's [code repository](https://github.com/sam2act/sam2act) for further instructions.
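
As a rough sketch (not an official setup script), the snippet below fetches the dataset with `huggingface_hub` and copies the task files into an RLBench checkout, mirroring PerAct's layout. The `repo_id` placeholder and the RLBench destination paths are assumptions; verify them against the SAM2Act repository's instructions.

```python
import shutil
from pathlib import Path

from huggingface_hub import snapshot_download

# repo_id is a placeholder -- substitute this dataset's actual identifier.
root = Path(snapshot_download(repo_id="<org>/MemoryBench", repo_type="dataset"))

# Assumed destination layout, mirroring PerAct's RLBench version:
# task .py files under rlbench/tasks/ and .ttm models under rlbench/task_ttms/.
rlbench = Path("~/RLBench/rlbench").expanduser()
for f in (root / "data" / "files").iterdir():
    if f.suffix == ".py":
        shutil.copy(f, rlbench / "tasks" / f.name)
    elif f.suffix == ".ttm":
        shutil.copy(f, rlbench / "task_ttms" / f.name)
```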
|
|
|
## Acknowledgement |
|
|
|
We would like to thank [Haoquan Fang](https://hq-fang.github.io/) for leading the conceptualization of MemoryBench and providing the key ideas and guidance for task design, and [Wilbert Pumacay](https://wpumacay.github.io/) for implementing the tasks and integrating them seamlessly into the dataset. Their combined efforts, together with the oversight of [Jiafei Duan](https://duanjiafei.com/) and all co-authors, were essential in developing this benchmark for evaluating spatial memory in robotic manipulation.
|
|
|
## Citation |
|
|
|
If you use this dataset, please cite the SAM2Act paper: |
|
|
|
```bibtex
@misc{fang2025sam2act,
  title={SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation},
  author={Haoquan Fang and Markus Grotz and Wilbert Pumacay and Yi Ru Wang and Dieter Fox and Ranjay Krishna and Jiafei Duan},
  year={2025},
  eprint={2501.18564},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2501.18564},
}
```