---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
languages:
- pt
licenses:
- unknown
multilinguality:
- monolingual
paperswithcode_id: lener-br
pretty_name: LeNER-Br language modeling
task_ids:
- language-modeling
datasets:
- lener_br
---
# Dataset Card for "LeNER-Br language modeling"
## Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the [LeNER-Br](https://huggingface.co/datasets/lener_br) dataset ([official site](https://cic.unb.br/~teodecampos/LeNER-Br/)).
The legal texts were downloaded from this [link](https://cic.unb.br/~teodecampos/LeNER-Br/LeNER-Br.zip) (93.6 MB) and processed to create a `DatasetDict` with train and validation splits (the validation split holds 20% of the data).
The LeNER-Br language modeling dataset allows the fine-tuning of language models such as BERTimbau [base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) and [large](https://huggingface.co/neuralmind/bert-large-portuguese-cased).
## Languages
Portuguese from Brazil.
## Dataset structure
```
DatasetDict({
validation: Dataset({
features: ['text'],
num_rows: 3813
})
train: Dataset({
features: ['text'],
num_rows: 15252
})
})
```
## Use
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset("pierreguillou/lener_br_finetuning_language_model")
```