Updating README with clean_up_tokenization_spaces
README.md
CHANGED
@@ -11,14 +11,14 @@ datasets:
   - HPLT/HPLT2.0_cleaned
 ---

-# HPLT
+# HPLT v2.0 BERT for Spanish

 <img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>

 This is one of the encoder-only monolingual language models trained as a second release by the [HPLT project](https://hplt-project.org/).
 It is a so-called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).

-
+We present monolingual LTG-BERT models for more than 50 languages out of 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0).

 All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
 - hidden size: 768
@@ -32,7 +32,7 @@ Every model uses its own tokenizer trained on language-specific HPLT data.

 [The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn)

-## Example usage
+## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`)

 This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.

@@ -49,7 +49,7 @@ output_p = model(**input_text)
 output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)

 # should output: '[CLS] It's a beautiful place.[SEP]'
-print(tokenizer.decode(output_text[0].tolist()))
+print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
 ```

 The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
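For readers of this commit: the last hunk only shows the tail of the README's usage example (the loading code before line 49 is outside the diff context). Below is a minimal sketch of the usage the README describes, not its exact code; the repository id is a placeholder, the input sentence is inferred from the expected output in the comment above, and the `[MASK]` token is assumed from the `[CLS]`/`[SEP]` markers in that output.

```python
# Minimal sketch, not the README's exact code: the loading lines are elided
# from the diff, so the repository id below is a placeholder.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder: substitute this model's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True pulls in the custom LTG-BERT wrapper (modeling_ltgbert.py).
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

mask_id = tokenizer.convert_tokens_to_ids("[MASK]")  # assumed mask token
input_text = tokenizer("It's a beautiful [MASK].", return_tensors="pt")
output_p = model(**input_text)

# Replace every [MASK] position with the model's top-scoring token.
output_text = torch.where(
    input_text.input_ids == mask_id,
    output_p.logits.argmax(-1),
    input_text.input_ids,
)

# The README's expected output for its example is "[CLS] It's a beautiful place.[SEP]".
print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
```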
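The hyper-parameter list is cut off at the hunk boundary; only "hidden size: 768" is visible here. The remaining values can be read from the model configuration itself. A small sketch, under the assumption that the custom LTG-BERT configuration exposes the usual BERT-style attribute names; if `configuration_ltgbert.py` names them differently, adjust accordingly.

```python
from transformers import AutoConfig

# Placeholder repository id; substitute this model's actual id.
config = AutoConfig.from_pretrained("<this-model-repo-id>", trust_remote_code=True)

# Attribute names assumed to follow BERT-style conventions.
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
```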
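On the change this commit is named after: in `transformers`, `clean_up_tokenization_spaces` controls whether `decode()` applies the post-processing that removes detokenization artefacts such as spaces before punctuation and around contractions. Passing it explicitly, as the updated line does, also avoids relying on a default that recent `transformers` releases have been changing, which is presumably why the new heading pins `transformers==4.46.1`. A short illustration; the exact raw output depends on this model's tokenizer.

```python
# Illustration only: the spacing of the raw decode depends on the tokenizer.
ids = output_text[0].tolist()
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False))  # may keep spaces before punctuation
print(tokenizer.decode(ids, clean_up_tokenization_spaces=True))   # applies the clean-up rules
```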
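The custom wrapper also registers the task-specific classes listed in the README. As a hedged sketch of loading one of them for fine-tuning, again with a placeholder repository id and an illustrative `num_labels`:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder: substitute this model's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The classification head is freshly initialised and is meant to be fine-tuned.
classifier = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    trust_remote_code=True,
    num_labels=2,  # illustrative value
)
```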