Dataset: embedding-data/coco_captions
Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas
License: mit
Commit 3255573 (parent: 6ac4694), committed by parquet-converter

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,190 +0,0 @@
- ---
- license: mit
- language:
- - en
- paperswithcode_id: embedding-data/coco_captions
- pretty_name: coco_captions
- task_categories:
- - sentence-similarity
- - paraphrase-mining
- task_ids:
- - semantic-similarity-classification
-
- ---
-
- # Dataset Card for "coco_captions"
-
- ## Table of Contents
-
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://cocodataset.org/#home](https://cocodataset.org/#home)
- - **Repository:** [https://github.com/cocodataset/cocodataset.github.io](https://github.com/cocodataset/cocodataset.github.io)
- - **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
- - **Point of Contact:** [[email protected]]([email protected])
- - **Size of downloaded dataset files:**
- - **Size of the generated dataset:**
- - **Total amount of disk used:** 6.32 MB
-
- ### Dataset Summary
-
- COCO is a large-scale object detection, segmentation, and captioning dataset. This repository contains five captions per image, which makes it useful for sentence-similarity tasks.
-
- Disclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
-
- ### Supported Tasks
-
- - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
-
- ### Languages
-
- - English.
-
- ## Dataset Structure
-
- Each example in the dataset is a quintet of similar sentences, formatted as a dictionary with the key "set" and the list of sentences as its value:
-
- ```
- {"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
- {"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
- ...
- {"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
- ```
-
- This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
-
- ### Usage Example
-
- Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
-
- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("embedding-data/coco_captions")
- ```
-
- The dataset is loaded as a `DatasetDict` and has the format:
-
- ```python
- DatasetDict({
-     train: Dataset({
-         features: ['set'],
-         num_rows: 82783
-     })
- })
- ```
-
- Review example `i` with:
-
- ```python
- dataset["train"][i]["set"]
- ```
-
- ### Data Instances
-
- [More Information Needed](https://cocodataset.org/#format-data)
-
- ### Data Splits
-
- [More Information Needed](https://cocodataset.org/#format-data)
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://cocodataset.org/#home)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://cocodataset.org/#home)
-
- #### Who are the annotators?
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ### Discussion of Biases
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ### Other Known Limitations
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ### Licensing Information
-
- The annotations in this dataset, along with this website, belong to the COCO Consortium and are licensed under a [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode).
-
- ### Citation Information
-
- [More Information Needed](https://cocodataset.org/#home)
-
- ### Contributions
-
- Thanks to:
-
- - Tsung-Yi Lin - Google Brain
- - Genevieve Patterson - MSR, Trash TV
- - Matteo R. Ronchi - Caltech
- - Yin Cui - Google
- - Michael Maire - TTI-Chicago
- - Serge Belongie - Cornell Tech
- - Lubomir Bourdev - WaveOne, Inc.
- - Ross Girshick - FAIR
- - James Hays - Georgia Tech
- - Pietro Perona - Caltech
- - Deva Ramanan - CMU
- - Larry Zitnick - FAIR
- - Piotr Dollár - FAIR
-
- for adding this dataset.
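The deleted card above refers to a post on training with similar sentence pairs, but the link did not survive extraction. As a stand-in, here is a minimal training sketch, assuming the `sentence-transformers`, `datasets`, and `torch` packages; the base model name, batch size, and epoch count are illustrative assumptions, not settings from the original card.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Each record holds five captions of the same image, so any two of them
# form a positive pair for contrastive training.
dataset = load_dataset("embedding-data/coco_captions", split="train")

examples = []
for row in dataset:
    captions = row["set"]
    # Pair the first caption with each of the remaining four.
    for other in captions[1:]:
        examples.append(InputExample(texts=[captions[0], other]))

model = SentenceTransformer("distilbert-base-uncased")  # illustrative base model
loader = DataLoader(examples, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)

# One epoch over ~331k pairs; hyperparameters are illustrative only.
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```

MultipleNegativesRankingLoss is a common choice for positive-pair data like these quintets, since every other pair in a batch serves as an in-batch negative.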
 
coco_captions.jsonl.gz → embedding-data--coco_captions_quintets/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bf0a7a50a7a43f4f010690bf3f7365ef0ce98afd0ea5747d04d00c8f3917a5f8
- size 6316394
+ oid sha256:3568ef79e7d5ed803ab5ab3c0bde7404e933211ed7970922bb1e6294b50ab7cb
+ size 11882372
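With this commit applied, the data lives in a Parquet file, so it can be read through 🤗 Datasets or directly with pandas. A minimal sketch, assuming the `datasets`, `pandas`, and `huggingface_hub` packages; the filename comes from the rename above, and the `refs/convert/parquet` revision is an assumption about where the Hub keeps auto-converted Parquet files.

```python
import pandas as pd
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Option 1: let 🤗 Datasets resolve the data files automatically.
dataset = load_dataset("embedding-data/coco_captions", split="train")
print(dataset[0]["set"])  # five captions describing the same image

# Option 2: fetch the converted Parquet file and read it with pandas.
# The revision below is an assumption about the auto-conversion branch.
path = hf_hub_download(
    repo_id="embedding-data/coco_captions",
    repo_type="dataset",
    filename="embedding-data--coco_captions_quintets/json-train.parquet",
    revision="refs/convert/parquet",
)
df = pd.read_parquet(path)
print(df["set"].iloc[0])
```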