---
license: mit
task_categories:
- image-to-image
- text-to-image
language:
- en
pretty_name: Robotic Action Prediction Dataset using RoboTwin
size_categories:
- 10K<n<100K
---
# Robotic Action Prediction Dataset
## Dataset Description
This dataset contains triplets of (current observation, action instruction, future observation) for training models that predict future frames of robotic manipulation actions.
## Dataset Structure
### Data Fields
- `current_frame`: Input image (RGB) of the current observation
- `instruction`: Textual description of the action to perform
- `future_frame`: Target image (RGB) showing the expected outcome 50 frames later
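### Loading the Dataset

A minimal sketch of how these fields might be accessed with the 🤗 `datasets` library. The repository id below is a placeholder; substitute the actual repo id of this dataset.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual dataset repository.
ds = load_dataset("bryandts/robotic-action-prediction", split="train")

sample = ds[0]
current = sample["current_frame"]    # RGB image of the current observation
instruction = sample["instruction"]  # e.g. "stack blocks"
future = sample["future_frame"]      # RGB image 50 frames later

print(instruction)
```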
### Data Splits
The dataset contains:
- Total samples: ~91,303 (300 episodes, ~304 samples per episode on average)
- Tasks (see the filtering sketch after this list):
- `block_hammer_beat`: "beat the block with the hammer"
- `block_handover`: "handover the blocks"
- `blocks_stack_easy`: "stack blocks"
### Dataset Statistics
| Task | Episodes | Frames per Episode |
|---------------------|----------|--------------------|
| block_hammer_beat | 100 | 200-300 |
| block_handover | 100 | 400-500 |
| blocks_stack_easy | 100 | 400-500 |
## Dataset Creation
### Source Data
- **Simulation Environment:** RoboTwin
- **Image Resolution:** At least 128×128 pixels
- **Frame Offset:** 50 frames between input and target
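
An illustrative sketch (not the authors' preprocessing code) of how the 50-frame offset pairs each observation with its target, assuming an episode is a list of RGB frames plus a single instruction string.

```python
FRAME_OFFSET = 50  # frames between input and target

def make_triplets(frames, instruction):
    """Build (current_frame, instruction, future_frame) triplets from one episode.

    frames: list of RGB images for the episode, in temporal order.
    instruction: textual description of the task (e.g. "handover the blocks").
    """
    triplets = []
    for t in range(len(frames) - FRAME_OFFSET):
        triplets.append({
            "current_frame": frames[t],
            "instruction": instruction,
            "future_frame": frames[t + FRAME_OFFSET],
        })
    return triplets
```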