    break
```

#### Gaussian

This sampling method adjusts to the underlying distribution while oversampling the central quartiles of the perplexity distribution of the documents in mC4 for a given language. Two parameters control the shape of the approximation: `factor` (the peakedness of the exponential function) and `width` (its spread). The default values are selected for Spanish.

```python
def _should_keep_doc_gaussian(self, doc, factor=None, width=None, boundaries=None, **kwargs):
    perplexity = self.get_perplexity(doc)
    width = (9 / 2) if width is None else width
    factor = 0.78 if factor is None else factor
    median = 662247.50212365 if boundaries is None else boundaries[1]
    exponential = np.exp((-1 / width) * ((perplexity - median) / median) ** 2)
    weighted_perplexity = factor * exponential
    return self.rng.uniform() < weighted_perplexity
```
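
The acceptance probability this function computes can be sketched standalone with the default Spanish values; the helper name below is illustrative only:

```python
import numpy as np

# Default Spanish parameters from _should_keep_doc_gaussian above.
FACTOR, WIDTH = 0.78, 9 / 2
MEDIAN = 662247.50212365

def keep_probability(perplexity: float) -> float:
    """Probability of keeping a document with the given perplexity."""
    exponential = np.exp((-1 / WIDTH) * ((perplexity - MEDIAN) / MEDIAN) ** 2)
    return FACTOR * exponential

# The probability peaks at the median, where it equals `factor`,
# and decays smoothly for documents with extreme perplexities.
print(keep_probability(MEDIAN))      # 0.78
print(keep_probability(2 * MEDIAN))  # smaller
```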

In order to use this sampling method, information about the quartile boundaries of the underlying distribution needs to be calculated beforehand and passed in when the dataset is instantiated. Moreover, the path to a [KenLM model](https://github.com/kpu/kenlm/) (a 5-gram language model), or an object with a method `.score(text: str) -> float`, also needs to be passed in to calculate the perplexity of each document. KenLM can be installed with pip:

```bash
pip install https://github.com/kpu/kenlm/archive/master.zip
```

```python
from datasets import load_dataset

mc4gaussian = load_dataset(
    "bertin-project/mc4-sampling",
    "es",
    split="train",
    streaming=True,
    sampling_method="gaussian",
    perplexity_model="./es.arpa.bin",
    boundaries=[536394.99320948, 662247.50212365, 919250.87225178],
    factor=0.78,
    width=9/2,
)
for sample in mc4gaussian:
    print(sample)
    break
```
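
Instead of a path to a KenLM binary, any object exposing `.score(text: str) -> float` can be passed as `perplexity_model`. A toy stand-in might look like the following; the class name and scoring rule are made up for illustration only:

```python
class ToyScorer:
    """Illustrative stand-in for a KenLM model: any object with a
    .score(text: str) -> float method can serve as `perplexity_model`.
    The scoring rule below is a fake log-probability based on length."""

    def score(self, text: str) -> float:
        # KenLM's score is a log10 probability; fake one proportional
        # to the token count so longer texts score lower.
        return -2.5 * len(text.split())

scorer = ToyScorer()
print(scorer.score("un texto de ejemplo"))  # -10.0
```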

Facebook has created and released 5-gram Kneser-Ney models for 100 languages, available to download and use within the KenLM library. To download a Kneser-Ney language model, choose a language code from the following list:

```bash
af,ar,az,be,bg,bn,ca,cs,da,de,el,en,es,et,fa,fi,fr,gu,he,hi,hr,hu,hy,id,is,it,ja,ka,kk,km,kn,ko,lt,lv,mk,ml,mn,mr,my,ne,nl,no,pl,pt,ro,ru,uk,zh
```

Then run the following download command, replacing `lang` with your language code:
265 |
+
|
266 |
+
```bash
|
267 |
+
wget http://dl.fbaipublicfiles.com/cc_net/lm/lang.arpa.bin
|
268 |
+
```
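
The same URL pattern can be built programmatically when downloading several models; the helper name here is hypothetical:

```python
# Hypothetical helper: builds the cc_net download URL for a language code,
# following the pattern of the wget command above.
def kenlm_url(lang: str) -> str:
    return f"http://dl.fbaipublicfiles.com/cc_net/lm/{lang}.arpa.bin"

print(kenlm_url("es"))  # http://dl.fbaipublicfiles.com/cc_net/lm/es.arpa.bin
```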

#### Stepwise

The stepwise sampling method uses a simpler criterion: it oversamples the central quartiles inversely proportionally to their range. Only `boundaries`, `factor` (the strength of the oversampling), and `perplexity_model` are needed:

```python
def _should_keep_doc_step(self, doc, factor=None, boundaries=None, **kwargs):
    perplexity = self.get_perplexity(doc)
    factor = 1.5e5 if factor is None else factor
    if boundaries is None:
        boundaries = [536394.99320948, 662247.50212365, 919250.87225178]
    if perplexity <= boundaries[0]:
        quartile_range = boundaries[0]
    elif perplexity <= boundaries[1]:
        quartile_range = boundaries[1] - boundaries[0]
    elif perplexity <= boundaries[2]:
        quartile_range = boundaries[2] - boundaries[1]
    else:
        # Documents beyond the third quartile are heavily undersampled.
        quartile_range = 10 * boundaries[2]
    probability = factor / quartile_range
    return self.rng.uniform() < probability
```
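
As with the Gaussian method, the per-quartile keep probabilities can be sketched standalone with the default Spanish values; the names below are illustrative only:

```python
# Default Spanish boundaries and factor from _should_keep_doc_step above.
BOUNDARIES = [536394.99320948, 662247.50212365, 919250.87225178]
FACTOR = 1.5e5

def keep_probability(perplexity: float) -> float:
    """Keep probability, inversely proportional to the quartile's range."""
    if perplexity <= BOUNDARIES[0]:
        quartile_range = BOUNDARIES[0]
    elif perplexity <= BOUNDARIES[1]:
        quartile_range = BOUNDARIES[1] - BOUNDARIES[0]
    elif perplexity <= BOUNDARIES[2]:
        quartile_range = BOUNDARIES[2] - BOUNDARIES[1]
    else:
        quartile_range = 10 * BOUNDARIES[2]
    return FACTOR / quartile_range

# The narrow central quartiles get the highest keep probabilities
# (a value above 1 simply means the document is always kept).
for p in (4e5, 6e5, 8e5, 1e6):
    print(p, round(keep_probability(p), 3))
```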

In order to use this sampling method, a similar invocation is needed:

```python
mc4stepwise = load_dataset(
    "bertin-project/mc4-sampling",
    "es",
    split="train",
    streaming=True,
    sampling_method="stepwise",
    perplexity_model="./es.arpa.bin",
    boundaries=[536394.99320948, 662247.50212365, 919250.87225178],
    factor=1.5e5,
)
for sample in mc4stepwise:
    print(sample)
    break
```

### Supported Tasks and Leaderboards

mC4-sampling is mainly intended to pretrain language models and word representations on a budget.