Push model using huggingface_hub.

- README.md +5 -104
- pytorch_model.bin +1 -1

README.md CHANGED
@@ -1,108 +1,9 @@
---
- license: mit
- language:
- - en
- pipeline_tag: robotics
- library_name: transformers
tags:
- -
- -
- - multimodal
- - pretraining
- - vla
- - diffusion
- - rdt
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
---
- # RDT-170M

-
-
-
-
- All the [code](https://github.com/thu-ml/RoboticsDiffusionTransformer/tree/main?tab=readme-ov-file), pre-trained model weights, and [data](https://huggingface.co/datasets/robotics-diffusion-transformer/rdt-ft-data) are licensed under the MIT license.
-
- Please refer to our [project page](https://rdt-robotics.github.io/rdt-robotics/) and [paper](https://arxiv.org/pdf/2410.07864) for more information.
-
- ## Model Details
-
- - **Developed by:** The RDT team consisting of researchers from the [TSAIL group](https://ml.cs.tsinghua.edu.cn/) at Tsinghua University
- - **Task Type:** Vision-Language-Action (language, image => robot actions)
- - **Model Type:** Diffusion Policy with Transformers
- - **License:** MIT
- - **Language(s) (NLP):** en
- - **Multi-Modal Encoders:**
-   - **Vision Backbone:** [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
-   - **Language Model:** [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl)
- - **Pre-Training Datasets:** 46 datasets consisting of [RT-1 Dataset](https://robotics-transformer1.github.io/), [RH20T](https://rh20t.github.io/), [DROID](https://droid-dataset.github.io/), [BridgeData V2](https://rail-berkeley.github.io/bridgedata/), [RoboSet](https://robopen.github.io/roboset/), and a subset of [Open X-Embodiment](https://robotics-transformer-x.github.io/). See [this link](https://github.com/thu-ml/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md#download-and-prepare-datasets) for a detailed list.
- - **Repository:** https://github.com/thu-ml/RoboticsDiffusionTransformer
- - **Paper:** https://arxiv.org/pdf/2410.07864
- - **Project Page:** https://rdt-robotics.github.io/rdt-robotics/
-
- ## Uses
-
- RDT takes a language instruction, RGB images (of up to three views), the control frequency (if any), and proprioception as input, and predicts the next 64 robot actions.
- RDT supports control of almost all robot manipulators with the help of the unified action space, which includes all the main physical quantities of a robot manipulator (e.g., end-effector and joint positions and velocities, and wheeled locomotion).
- To deploy on your robot platform, you need to fill the relevant quantities of the raw action vector into the unified action-space vector. See [our repository](https://github.com/thu-ml/RoboticsDiffusionTransformer) for more information.
-
- **Out-of-Scope**: Due to the embodiment gap, RDT cannot yet generalize to new robot platforms (not seen in the pre-training datasets).
- In this case, we recommend collecting a small dataset of the target robot and then using it to fine-tune RDT.
- See [our repository](https://github.com/thu-ml/RoboticsDiffusionTransformer) for a tutorial.
-
- Here's an example of how to use the RDT-1B model for inference on a robot:
- ```python
- # Please first clone the repository and install its dependencies,
- # then switch to the root directory of the repository: "cd RoboticsDiffusionTransformer"
-
- from typing import List
-
- import torch
- from PIL import Image
-
- # Import the model-creation helper from the code base
- from scripts.agilex_model import create_model
-
- # Names of cameras used for visual input
- CAMERA_NAMES = ['cam_high', 'cam_right_wrist', 'cam_left_wrist']
- config = {
-     'episode_len': 1000,   # Max length of one episode
-     'state_dim': 14,       # Dimension of the robot's state
-     'chunk_size': 64,      # Number of actions to predict in one step
-     'camera_names': CAMERA_NAMES,
- }
- pretrained_vision_encoder_name_or_path = "google/siglip-so400m-patch14-384"
- # Create the model with the specified configuration
- model = create_model(
-     args=config,
-     dtype=torch.bfloat16,
-     pretrained_vision_encoder_name_or_path=pretrained_vision_encoder_name_or_path,
-     pretrained='robotics-diffusion-transformer/rdt-1b',
-     control_frequency=25,
- )
-
- # Start the inference process
- # Load the pre-computed language embeddings
- # Refer to scripts/encode_lang.py for how to encode the language instruction
- lang_embeddings_path = 'your/language/embedding/path'
- text_embedding = torch.load(lang_embeddings_path)['embeddings']
- images: List[Image.Image] = ...  # The images from the last 2 frames
- proprio = ...                    # The current robot state
- # Perform inference to predict the next `chunk_size` actions
- actions = model.step(
-     proprio=proprio,
-     images=images,
-     text_embeds=text_embedding
- )
- ```
-
- <!-- RDT-1B supports finetuning on custom datasets, deploying and inferencing on real robots, and retraining the model.
- Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for all the above guides. -->
-
-
- ## Citation
-
- If you find our work helpful, please cite us:
- ```bibtex
- @article{liu2024rdt,
-   title={RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation},
-   author={Liu, Songming and Wu, Lingxuan and Li, Bangguo and Tan, Hengkai and Chen, Huayu and Wang, Zhengyi and Xu, Ke and Su, Hang and Zhu, Jun},
-   journal={arXiv preprint arXiv:2410.07864},
-   year={2024}
- }
- ```
- Thank you!
+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Library: https://huggingface.co/robotics-diffusion-transformer/rdt-1b
+ - Docs: [More Information Needed]
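The removed model card's Uses section says deployment amounts to filling the relevant quantities of the robot's raw action vector into the unified action-space vector. As a rough illustration only (the unified dimension and the index names below are hypothetical placeholders, not the repository's actual configuration), the idea looks like this:

```python
import numpy as np

# Hypothetical size of the unified action space and a hypothetical index
# mapping; the real values live in the RoboticsDiffusionTransformer repo,
# not here.
UNIFIED_DIM = 128
IDX = {f"right_arm_joint_{i}_pos": i for i in range(6)}
IDX.update({f"left_arm_joint_{i}_pos": 32 + i for i in range(6)})
IDX["right_gripper_open"] = 6
IDX["left_gripper_open"] = 38


def to_unified(raw: dict) -> np.ndarray:
    """Scatter the quantities a specific robot actually measures into the
    padded unified vector; unmeasured entries stay at zero."""
    vec = np.zeros(UNIFIED_DIM, dtype=np.float32)
    for name, value in raw.items():
        vec[IDX[name]] = value
    return vec


# Example: a 6-DoF right arm plus gripper
raw_state = {f"right_arm_joint_{i}_pos": 0.1 * i for i in range(6)}
raw_state["right_gripper_open"] = 1.0
unified = to_unified(raw_state)
print(unified.shape)  # (128,)
```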
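The removed inference example loads pre-computed language embeddings and defers to scripts/encode_lang.py for how to produce them. Below is a minimal, unofficial sketch of that step, assuming the google/t5-v1_1-xxl encoder listed in the model details; the instruction text and output path are made up.

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# google/t5-v1_1-xxl is the text encoder listed in the removed model card;
# the checkpoint is large, so bfloat16 keeps memory manageable.
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
encoder = T5EncoderModel.from_pretrained(
    "google/t5-v1_1-xxl", torch_dtype=torch.bfloat16
)
encoder.eval()

instruction = "Pick up the red block and place it in the box."  # made-up example
inputs = tokenizer(instruction, return_tensors="pt")

with torch.no_grad():
    # Token-level embeddings from the encoder's last hidden state
    embeddings = encoder(**inputs).last_hidden_state

# Save in the {'embeddings': ...} layout that the inference example loads
torch.save({"embeddings": embeddings}, "lang_embed.pt")
```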
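The replacement README only records that the checkpoint was pushed with huggingface_hub's PyTorchModelHubMixin. For readers unfamiliar with that integration, here is a minimal sketch of how the mixin adds save_pretrained / push_to_hub / from_pretrained to a plain nn.Module; the TinyPolicy class and the repo id are placeholders, not RDT code.

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class TinyPolicy(nn.Module, PyTorchModelHubMixin):
    """Placeholder module; inheriting the mixin adds save/push/load helpers."""

    def __init__(self, state_dim: int = 14, action_dim: int = 128):
        super().__init__()
        self.net = nn.Linear(state_dim, action_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


model = TinyPolicy()

# Serialize the weights locally (recent huggingface_hub versions also record
# the init arguments in config.json)
model.save_pretrained("tiny-policy")

# Upload to the Hub; this is the step the commit message refers to.
# "your-username/tiny-policy" is a placeholder repo id.
model.push_to_hub("your-username/tiny-policy")

# Anyone can then reload the model without hand-written loading code
reloaded = TinyPolicy.from_pretrained("your-username/tiny-policy")
```

Depending on the huggingface_hub version, the saved weights end up as pytorch_model.bin (as in this commit) or model.safetensors.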
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
- oid sha256:
+ oid sha256:2e87fccee0292b25ee4f714670dd7d8da8876ea092da3744db3292db070b7133
size 332520250
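Because pytorch_model.bin is tracked with Git LFS, the commit only rewrites the pointer file above; the sha256 oid and byte size identify the actual weights. A small optional check of a downloaded copy against that pointer (the local path is a placeholder):

```python
import hashlib
import os

path = "pytorch_model.bin"  # placeholder path to the downloaded weights

# Values taken from the updated LFS pointer above
expected_oid = "2e87fccee0292b25ee4f714670dd7d8da8876ea092da3744db3292db070b7133"
expected_size = 332520250

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks so large checkpoints don't need to fit in memory
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch with LFS pointer"
assert sha256.hexdigest() == expected_oid, "sha256 mismatch with LFS pointer"
print("pytorch_model.bin matches the LFS pointer")
```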