mranzinger committed (verified)
Commit acf7d5d · 1 Parent(s): 3138862

Update README.md

Files changed (1):
  1. README.md +153 -121

README.md CHANGED

---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
---

# Model Overview

[[**Github**](https://github.com/NVlabs/RADIO)] [[**CVPR 2025**](https://arxiv.org/abs/2412.07679)] [[**CVPR 2024**](https://arxiv.org/abs/2312.06709)]

## Description

This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

C-RADIOv2 models are available in multiple sizes:
* Base (90M parameters).
* Large (320M parameters).
* Huge (653M parameters).
* Gigantic (1.1B parameters).

C-RADIOv2 was trained for 1M steps (400k more steps than v1), using inverse frequency sampling for data balancing, and [PHI Standardization](https://arxiv.org/abs/2410.01680) for teacher distribution balancing.

This model is ready for commercial/non-commercial use.

### License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Deployment Geography

Global.

## Use Case

The embeddings generated by this model are expected to be used by a downstream application.
For example:

* Image-level understanding (image classification, curation, etc.).
* Dense processing (semantic segmentation, depth estimation, etc.).
* Integration into a Vision-Language Model.

## Release Date

Hugging Face: 03/26/2025 via the [RADIO Collection of Models](https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6).

## References

* \[CVPR 2025\] [**RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models**](https://arxiv.org/abs/2412.07679)
* \[CVPR 2024\] [**AM-RADIO: Agglomerative Vision Foundation Model - Reduce All Domains Into One**](https://arxiv.org/abs/2312.06709)

## Model Architecture

**Architecture Type:** Neural Network <br>
**Network Architecture:** Vision Transformer <br>

## Input

**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels <br>
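
Because the Vision Transformer tokenizes the image into 16x16 patches, input height and width must be multiples of 16. The image processor shown in the Usage section below handles resizing automatically; purely as an illustrative sketch (this helper is hypothetical and not part of the released API), an arbitrary resolution can be snapped to a valid one like so:

```python
# Hypothetical helper: round a (height, width) pair to the nearest multiple of the
# 16-pixel patch size, within the tested range of 256 to 2048 pixels per side.
def nearest_valid_resolution(height: int, width: int, patch_size: int = 16,
                             min_side: int = 256, max_side: int = 2048):
    snap = lambda v: min(max_side, max(min_side, round(v / patch_size) * patch_size))
    return snap(height), snap(width)

print(nearest_valid_resolution(1080, 1920))  # -> (1088, 1920)
```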

## Output

**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** 2D <br>
**Other Properties Related to Output:** Downstream model required to leverage image features <br>

## Usage

RADIO returns a tuple with two tensors.
The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image.
It has shape `(B,C)`, with `B` being the batch dimension and `C` being some number of channels.
The `spatial_features` represent more localized content, which should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

hf_repo = "nvidia/C-RADIOv2-L"

# Load the preprocessor and the model; trust_remote_code pulls in the RADIO model definition.
image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()

# Preprocess a single RGB image into a (1, 3, H, W) tensor of pixel values.
image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()

# Forward pass: `summary` is (B, C); `features` holds the flattened spatial tokens.
summary, features = model(pixel_values)
```

Spatial features have shape `(B,T,D)`, with `T` being the flattened spatial tokens and `D` being the number of channels for the spatial features. Note that `C != D` in general.
Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For RADIO, the patch size is 16.

```python
from einops import rearrange

patch_size = 16
# `features` comes from the forward pass above; `pixel_values` supplies the input spatial size.
spatial_features = rearrange(features, 'b (h w) d -> b d h w',
                             h=pixel_values.shape[-2] // patch_size,
                             w=pixel_values.shape[-1] // patch_size)
```

The resulting tensor will have shape `(B,D,H,W)`, as is typically seen with computer vision models.
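
As one illustrative example of downstream use (the helper and file paths below are hypothetical, not part of the released API), the `summary` embedding can be used directly for image similarity by comparing L2-normalized embeddings with cosine similarity:

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Hypothetical helper that reuses `model` and `image_processor` from the snippet above.
def summarize(path: str) -> torch.Tensor:
    img = Image.open(path).convert('RGB')
    pv = image_processor(images=img, return_tensors='pt', do_resize=True).pixel_values.cuda()
    with torch.no_grad():
        summary, _ = model(pv)
    return F.normalize(summary, dim=-1)  # (1, C), unit length

# Cosine similarity between two images; values closer to 1.0 indicate more similar image-level content.
similarity = (summarize('./assets/image_a.png') * summarize('./assets/image_b.png')).sum(dim=-1)
print(similarity.item())
```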

## Software Integration

**Runtime Engine(s):**
* TAO - 24.10 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows

## Model Version(s)

* C-RADIOv2-B (90M parameters).
* C-RADIOv2-L (320M parameters).
* C-RADIOv2-H (653M parameters).
* C-RADIOv2-G (1.8B parameters).

**Links:**

* https://huggingface.co/nvidia/C-RADIOv2-B
* https://huggingface.co/nvidia/C-RADIOv2-L
* https://huggingface.co/nvidia/C-RADIOv2-H
* https://huggingface.co/nvidia/C-RADIOv2-g
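
Any of these checkpoints can be loaded with the code from the Usage section above by swapping the repository id, for example:

```python
from transformers import AutoModel, CLIPImageProcessor

# Select a different C-RADIOv2 variant by changing the repository id.
hf_repo = "nvidia/C-RADIOv2-B"  # or "nvidia/C-RADIOv2-H", "nvidia/C-RADIOv2-g"
image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True).eval().cuda()
```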

# Training and Evaluation Datasets

## Training Dataset

NV-CC-Img-Text-Dataset <br>

### Data Collection Method by dataset

* Automated <br>

### Labeling Method by dataset

* Not Applicable (no labels are needed)

### Properties

* 700 Million Images <br>

## Evaluation Dataset

**Link:** [ImageNet](https://www.image-net.org/) <br>

### Data Collection Method by dataset

* Automated <br>

### Labeling Method by dataset

* Human <br>

**Properties:** This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images. <br>
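
The card reports image classification accuracy as a metric but does not prescribe an evaluation protocol. One common recipe with a frozen feature extractor is a linear probe trained on the summary embeddings; the sketch below is a minimal, hypothetical illustration that assumes the embeddings and labels have already been extracted with the backbone above:

```python
import torch
import torch.nn as nn

# Hypothetical linear probe on frozen summary embeddings.
# `train_embeddings`: (N, C) float tensor of summary vectors; `train_labels`: (N,) class ids.
def train_linear_probe(train_embeddings, train_labels, num_classes=1000, epochs=10, lr=1e-3):
    probe = nn.Linear(train_embeddings.shape[-1], num_classes).cuda()
    optimizer = torch.optim.AdamW(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits = probe(train_embeddings.cuda())
        loss = loss_fn(logits, train_labels.cuda())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return probe
```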

## Inference

**Engine:** PyTorch <br>
**Test Hardware:** A100 <br>

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Bias

Field | Response
:---------------------------------------------------------------------------------------------------|:---------------
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

### Explainability

Field | Response
:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
Intended Application & Domain: | Visual Feature Extraction
Model Type: | Vision Transformer
Intended Users: | Developers of downstream vision applications
Output: | Image embeddings
Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings.
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU).
Potential Known Risks: | This model has only been tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. Additionally, the generated embeddings might fail to disambiguate differences that appear evident to humans (e.g., two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

### Privacy

Field | Response
:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
Generatable or reverse engineerable personal data? | None
Personal data used to create this model? | None
How often is dataset reviewed? | Before Every Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes

### Safety

Field | Response
:---------------------------------------------------|:----------------------------------
Model Application(s): | Generation of visual embeddings
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by the NVIDIA Open Model License Agreement
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions on dataset access are enforced during training, and dataset license constraints are adhered to.