---
license: other
language:
- pt
pretty_name: ViTucano-Pretrain
task_categories:
- image-to-text
- text-generation
size_categories:
- 100K<n<1M
viewer: false
tags:
- image-to-text
---
# ViTucano-Pretrain

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Acknowledgments](#acknowledgments)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/TucanoBR/ViTucano-Pretrain
- **Repository:** https://huggingface.co/datasets/TucanoBR/ViTucano-Pretrain
- **Paper:** [Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/2411.07854)
- **Point of Contact:** [Nk-correa](mailto:[email protected])

### Dataset Summary

ViTucano-Pretrain is a translation of the original [liuhaotian/LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain), obtained via Google's translation API. LLaVA Visual Instruct Pretrain LCS-558K is a subset of the LAION/CC/SBU dataset, filtered with a more balanced concept coverage distribution. This dataset was used to train the **ViTucano**, our first attempt at creating a vision assistant natively pretrained in Portuguese. **ViTucano** is built on top of the [Tucano series](https://arxiv.org/abs/2411.07854) using the [TinyLLaVA Factory](https://arxiv.org/abs/2405.11788).

### Supported Tasks and Leaderboards

This dataset can be utilized for tasks involving language modeling and visual instruction tuning.

### Languages

Portuguese.

## Dataset Structure

### Data Instances

An example instance looks like this:

```python
{
    "id": "004539375",
    "image": "train/00453/004539375.jpg",
    "conversations": [
        {
            "from": "human",
            "value": "Renderize um resumo claro e conciso da foto.\n<image>"
        },
        {
            "from": "gpt",
            "value": "Selecione móveis de luxo 3 - colchão de espuma de memória de gel de polegada"
        }
    ],
    "blip_caption": "Selecione móveis de luxo 3 - colchão de espuma de memória de gel de polegada",
    "url": "http://ec1.ostkcdn.com/images/products/8111140/P15459545.jpg"
}
```

### Data Fields

The dataset consists of the following features:

- **id:** an identifier (name of the respective file) for that image.
- **image:** the path to the file in the original folder configuration.
- **conversations:** a list of dictionaries, where each dictionary represents a message or an entry in a conversation.
- **blip_caption:** the original BLIP caption.
- **url:** the URL of the corresponding image.
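
Once `data-pretraining.json` has been downloaded (see [Data Splits](#data-splits) below), the annotations can be loaded with the standard library. A minimal sketch, assuming the file is a single JSON array as in the original LLaVA-Pretrain release:

```python
import json

# Load the annotation file (download instructions are in the next section).
with open("data-pretraining.json", encoding="utf-8") as f:
    samples = json.load(f)

print(f"Loaded {len(samples)} samples")

# Inspect the first sample: identifier, relative image path, and conversation turns.
sample = samples[0]
print(sample["id"], sample["image"])
for turn in sample["conversations"]:
    print(f"{turn['from']}: {turn['value']}")
```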

### Data Splits

The only available split is `train`.

To use this dataset, you will need to download both the `data-pretraining.json` and `images.zip` files available in this folder:

```bash
wget https://huggingface.co/datasets/TucanoBR/ViTucano-Pretrain/resolve/main/data-pretraining.json
wget https://huggingface.co/datasets/TucanoBR/ViTucano-Pretrain/resolve/main/images.zip
```

You can also do this via the `huggingface_hub` library:

```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="TucanoBR/ViTucano-Pretrain", repo_type="dataset")
```

Unzip the images so that you get the following folder structure (e.g., `unzip images.zip -d "path/to/train"`):

```bash
├── train
    ├── 00000
    ├── 00001
    ├── 00002
    └── etc ...
```

Done! The data is ready to train your projector.
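
As a quick sanity check, you can verify that the relative `image` paths in the annotation file resolve against the extracted folder. A sketch, assuming `Pillow` is installed and `base_dir` points at the directory containing `train/`:

```python
import json
from pathlib import Path

from PIL import Image  # pip install Pillow

base_dir = Path(".")  # directory containing the extracted `train/` folder

with open("data-pretraining.json", encoding="utf-8") as f:
    samples = json.load(f)

# Open a handful of images to confirm the paths resolve correctly.
for sample in samples[:5]:
    image_path = base_dir / sample["image"]  # e.g., train/00453/004539375.jpg
    with Image.open(image_path) as img:
        print(sample["id"], img.size, img.mode)
```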

## Dataset Creation

### Curation Rationale

This dataset is a translation of the original [liuhaotian/LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) obtained via Google's translation API.
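
The exact translation pipeline is not published. As a rough illustration, translating a single caption with the Google Cloud Translation client could look like the sketch below; the `google-cloud-translate` package, the credentials setup, and the helper name are assumptions, not the authors' actual code:

```python
# Illustrative only: the authors' actual translation pipeline is not published.
# Requires the `google-cloud-translate` package and configured credentials.
from google.cloud import translate_v2 as translate

client = translate.Client()

def translate_caption(text: str) -> str:
    # `format_="text"` avoids HTML entity escaping in the output.
    result = client.translate(
        text, source_language="en", target_language="pt", format_="text"
    )
    return result["translatedText"]

print(translate_caption("Render a clear and concise summary of the photo."))
```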

### Source Data

#### Who are the source language producers?

All text samples were translated from English to Portuguese.

### Annotations

#### Annotation process

Read this [dataset card](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) for more information.

#### Who are the annotators?

Read this [dataset card](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) for more information.

## Considerations for Using the Data

**Warning:** This dataset may contain NSFW (Not Safe For Work) content, including explicit images and text captions with offensive/sensitive language.

### Other Known Limitations

This dataset has been translated using translation engines, potentially resulting in corrupted samples. While useful for quickly converting text between languages, translation engines often struggle with accurately preserving the syntax, semantics, and context of certain languages.
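
If this is a concern for your use case, a lightweight heuristic pass can flag samples for manual review before training. A sketch; the markers and checks below are illustrative assumptions, not part of the dataset:

```python
import json

def looks_suspicious(sample: dict) -> bool:
    """Illustrative heuristics: flag empty responses or likely untranslated text."""
    english_markers = (" the ", " and ", " with ")
    for turn in sample["conversations"]:
        text = turn["value"].replace("<image>", "").strip()
        if turn["from"] == "gpt" and not text:
            return True  # empty model response
        if any(marker in f" {text.lower()} " for marker in english_markers):
            return True  # caption likely stayed in English
    return False

with open("data-pretraining.json", encoding="utf-8") as f:
    samples = json.load(f)

flagged = [s for s in samples if looks_suspicious(s)]
print(f"Flagged {len(flagged)} of {len(samples)} samples for manual review")
```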

## Additional Information

### Dataset Curators

[Nicholas Kluge Corrêa](mailto:[email protected]).

### Licensing Information

Users of this dataset must comply with the licenses of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) and [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic captions).

The dataset is licensed under Creative Commons Attribution 4.0 International, and its use should also abide by the [policy of OpenAI](https://openai.com/policies/terms-of-use).

### Citation Information

#### ViTucano

```bibtex
@misc{correa2024vitucano,
    author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
    title={{ViTucano: A Portuguese Vision Assistant}},
    year={2024},
    howpublished={\url{https://huggingface.co/TucanoBR}},
}
```

#### Tucano

```bibtex
@misc{correa2024tucano,
    author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
    title={{Tucano: Advancing Neural Text Generation for Portuguese}},
    year={2024},
    eprint={2411.07854},
    archivePrefix={arXiv},
}
```

#### TinyLLaVA Factory

```bibtex
@article{jia2024tinyllava,
  title={TinyLLaVA Factory: A Modularized Codebase for Small-scale Large Multimodal Models},
  author={Jia, Junlong and Hu, Ying and Weng, Xi and Shi, Yiming and Li, Miao and Zhang, Xingjian and Zhou, Baichuan and Liu, Ziyu and Luo, Jie and Huang, Lei and Wu, Ji},
  journal={arXiv preprint arXiv:2405.11788},
  year={2024}
}
```

#### LLaVA

```bibtex
@misc{liu2023llava,
      title={Visual Instruction Tuning}, 
      author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
      publisher={NeurIPS},
      year={2023},
}
```

### Acknowledgments

We gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.

### Contributions

If you want to contribute, contact me at [[email protected]](mailto:[email protected])!