modelId
string | author
string | last_modified
timestamp[us, tz=UTC] | downloads
int64 | likes
int64 | library_name
string | tags
list | pipeline_tag
string | createdAt
timestamp[us, tz=UTC] | card
string |
|---|---|---|---|---|---|---|---|---|---|
V3RX2000/distilbert-base-uncased-finetuned-squad
|
V3RX2000
| 2021-10-12T04:47:10Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2246 | 1.0 | 5533 | 1.1484 |
| 0.9433 | 2.0 | 11066 | 1.1294 |
| 0.7625 | 3.0 | 16599 | 1.1580 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
V3RX2000/distilbert-base-uncased-finetuned-cola
|
V3RX2000
| 2021-10-12T02:10:11Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5396261051709696
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8107
- Matthews Correlation: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5509 | 0.3827 |
| 0.3498 | 2.0 | 1070 | 0.4936 | 0.5295 |
| 0.2369 | 3.0 | 1605 | 0.6505 | 0.5248 |
| 0.1637 | 4.0 | 2140 | 0.8107 | 0.5396 |
| 0.1299 | 5.0 | 2675 | 0.8738 | 0.5387 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lighteternal/stsb-xlm-r-greek-transfer
|
lighteternal
| 2021-10-11T21:16:05Z
| 184
| 6
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"el",
"arxiv:2004.09813",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z
|
---
language:
- en
- el
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "Το κινητό έπεσε και έσπασε."
sentences: [
"H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."
]
pipeline_tag: sentence-similarity
license: apache-2.0
---
# Semantic Textual Similarity for the Greek language using Transformers and Transfer Learning
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
We follow a Teacher-Student transfer learning approach described [here](https://www.sbert.net/examples/training/multilingual/README.html) to train an XLM-Roberta-base model on STS using parallel EN-EL sentence pairs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('{MODEL_NAME}')
sentences1 = ['Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.']
sentences2 = ["H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."]
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
#Compute cosine-similarities (clone repo for util functions)
from sentence_transformers import util
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
#Output the pairs with their score
for i in range(len(sentences1)):
print("{} {} Score: {:.4f}".format(sentences1[i], sentences2[i], cosine_scores[i][i]))
#Outputs:
#Το κινητό έπεσε και έσπασε. H πτώση κατέστρεψε τη συσκευή. Score: 0.6741
#Το κινητό έπεσε και έσπασε. Το αυτοκίνητο έσπασε στα δυο. Score: 0.5067
#Το κινητό έπεσε και έσπασε. Ο υπουργός έπεσε και έσπασε το πόδι του. Score: 0.4548
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained(
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
#### Similarity Evaluation on STS.en-el.txt (translated manually for evaluation purposes)
We measure the semantic textual similarity (STS) between sentence pairs in different languages:
| cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
0.834474802920369 | 0.845687403828107 | 0.815895882192263 | 0.81084300966291 | 0.816333562677654 | 0.813879742416394 | 0.7945167996031 | 0.802604238383742 |
#### Translation
We measure the translation accuracy. Given a list with source sentences, for example, 1000 English sentences. And a list with matching target (translated) sentences, for example, 1000 Greek sentences. For each sentence pair, we check if their embeddings are the closest using cosine similarity. I.e., for each src_sentences[i] we check if trg_sentences[i] has the highest similarity out of all target sentences. If this is the case, we have a hit, otherwise an error. This evaluator reports accuracy (higher = better).
| src2trg | trg2src |
| ----------- | ----------- |
| 0.981 | 0.9775 |
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 135121 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 400, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
## Citing & Authors
Citation info for Greek model: TBD
Based on the transfer learning approach of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813)
|
ismaelardo/BETO_3d
|
ismaelardo
| 2021-10-11T18:50:46Z
| 11
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
Este es el primer modelo de prueba BETO_3D
|
lincoln/barthez-squadFR-fquad-piaf-question-generation
|
lincoln
| 2021-10-11T15:24:58Z
| 425
| 4
|
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"seq2seq",
"barthez",
"fr",
"dataset:squadFR",
"dataset:fquad",
"dataset:piaf",
"arxiv:2010.12321",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
language:
- fr
license: mit
pipeline_tag: "text2text-generation"
datasets:
- squadFR
- fquad
- piaf
metrics:
- bleu
- rouge
widget:
- text: "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus, des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\
Elle est souvent associée aux <hl>données massives et à l'analyse des données<hl>."
tags:
- seq2seq
- barthez
---
# Génération de question à partir d'un contexte
Le modèle est _fine tuné_ à partir du modèle [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) afin de générer des questions à partir d'un paragraphe et d'une suite de token. La suite de token représente la réponse sur laquelle la question est basée.
Input: _Les projecteurs peuvent être utilisées pour \<hl\>illuminer\<hl\> des terrains de jeu extérieurs_
Output: _À quoi servent les projecteurs sur les terrains de jeu extérieurs?_
## Données d'apprentissage
La base d'entrainement est la concatenation des bases SquadFR, [fquad](https://huggingface.co/datasets/fquad), [piaf](https://huggingface.co/datasets/piaf). L'input est le context et nous avons entouré à l'aide du token spécial **\<hl\>** les réponses.
Volumétrie (nombre de triplet contexte/réponse/question):
* train: 98 211
* test: 12 277
* valid: 12 776
## Entrainement
L'apprentissage s'est effectué sur une carte Tesla V100.
* Batch size: 20
* Weight decay: 0.01
* Learning rate: 3x10-5 (décroit linéairement)
* < 24h d'entrainement
* Paramètres par défaut de la classe [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)
* Total steps: 56 000
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAj0AAAGOCAYAAAB8J7JHAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAEKXSURBVHhe7d1/sB11fcf/zHSmihVDKagFpqQDrVUcQptaf9Q2acf6ox1M1NKpVkpGWrRaJ3FqW6f/JPX7HR0pNSkWKUMRqEz6bRSj8kMRh8QREWyQYtFCgw0Uwy8hKYIKWrvf+9y77+Rz9+45d2/u+bHnfJ6Pmc/cuz/Ont09e3Zf57Of3V1WSJIkZcDQI0mSsmDokSRJWTD0SJKkLBh6JElSFgw9kiQpC4YeSZKUBUOPJEnKgqFHkiRlwdAjSZKyYOiRJElZMPRIkqQsGHokSVIWDD2SJCkLhh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JGm0I9+9KPioosuKu64446qz9Lt3Lmz2Lx5c/l31B5//PHi0ksvLd+fMizDnn4bXZiHQZmmZdF0MPRo4nEQXrZs2VgOxl31gx/8oFwnf//3f1/1aS/CTd041/OqVauKFStWFGvWrCnLsAx7+qlNmzYVe/furboO4f2nJSiwvRh61CWGHk08Q898Swk9HIx5bV2EoVGv50996lPl/Hz5y1+u+gzPKENPr212mkIPy2HoUZcYejTxFht6brvttuLAgQNVVzPG2bVrV9U1H7/QGd70S30hgwwNzGeTYYSehfSalzrWWdtxwUFzoflpOz3G6zduPfTwWX3rW9+quuZjO2I7WGh7Ckwvtpm2oYdxHn744aqrGdM8nG2xrcVMP11GqWsMPZp47GTbhJ44mEc57bTT5u2cN2zYUBx11FFzxjvrrLOqobMHTU6zpMPpXggHxXXr1s15Hd1xsGQ+6Ee7lbqY7xiXv+vXr58zLQ6U6YG3KfSsXr26LHX04z1QX0dR0Gs9s87ScZvWK/0ZL10HrOcdO3ZUYzRjuWL8KPH+vAfvlQ6r1yrEPKefW9M6CBF6tm3bNudzfte73lWNMatpO2Be6oEq1nm6Xtme0tdFieWK0EM5+uijDw4///zzy+Eptpd0e+X/9PPp9XlS4jPvh2nVp1//zJgOy5jOS0yb/+ufiTROhh5NvDiwpTv7utj5b9mypQwHHJxWrlxZHrgiLNAvxgn0S3fyy5cvLw9a8RoOvE1Bpe7EE08s3y/mkb/0IwQEDhwcOOsYLw1e/M98xPsyf3RzsAyHG3oQ66quaT3X1yvDYr2mGIcDIuOzziiMR780rNUxPQ6a8b5RkK5TpsE8MF66LAyjH+OyvhivHshSEXpOOeWU4pZbbikeffTRg++fHrzjc495T7enFOuWZUznM94/lqmO9z/99NOLjRs3ltNlHPr9xE/8xJx55/2ZRqx7SgSqGI+/vD4tzBPjMO1+4vvANJkOJabPdALrO5Yxvivx/oxr6FGXGHo08dgB13fEdRFWUrFTj5ATB/A4kDVh+EK1E3VxcIoDQWA6af+m8WLZ0oMJ3emBHfHaGG8UoYf1RHd9vcZ4zFOgu/7e9en1EqEjFQGn/to4KId4j/r66oVwwfgf+chHqj6zTj311OK4444rnnzyyarPfLE9pfMUAaP+2aM+bmAenvvc5855zfbt28vx+RvYpqk9qyPgNfVHbCfpZ9PL2rVry/eoY/oMC7G99FpGQ4+6xNCjiRcHtqYDCJoORiENQxEo+LW+devWxl/CUTvBr/BPfvKTVd/+4pQZO/+0MI10vggRzE96gGbeOMiEfstK/3jtKEJP23lBvRsRmppen2JdMV5qEPPYJEJPvQ3NO97xjrL/N7/5zarP7LT5DHlNFMZJQ3GvdY5e88V0OH1Zx/isC8S2ynjpNkVh+216z1gX6efw3e9+t7j++uvnFLYdsK2n4SbU1z3dTeEI6TxLXWDo0cRb6MDWb3j9oETQIWiwE+c1HEDSgxgHakIMQYThbXbqTJ/pxXvVSxqueO84RcJ7EZbSX+1Rw8GwOvrHAW3coad+wEznLdXr9SnWb31+mOemU4G95rGtCC919enG58DnxXKxjcQ46XL2WudIp5fi/Zu2KcaP/vFerOd4j7TUa3rYxtiWIuCHPXv2lNNJy/79+8th/N/0mdW3D7p5zyaMZ+hRlxh6NPHqB6S6+FWchpdA/16nAjhQRM1OE6YbB4B+pwuipqeNdFniVER62qDXskatSRykFhN66rVL9YNaqL93dPdar+k0690hnV4vTaGn1zwyL+k0Yx7bitqaXjU99957b9nddAqJ7YVx0uXstc6RzmeqTeiJbbrNaSq2DYI023Ld//3f/xU//OEP55TQa95Z7rRmh+Xtt4yGHnWJoUcTLw5s/Q6eHKTqv+Djdf0OHHEQTYNHHcObDughptEUDpowr/wi50BSP1AxHxxw6r/Yo+Yhao2aQg/zGLVIIdZBOv8Rtuqa1nPTvMTr0+Wtv0eoT69JU+iJdVr/7Fhn6QE55rmtCD1NbXq4QWJgnPry0F3v3ys4gHGjPVmqTehB0zZdR+ChRoztqKl2sB8+V8J6+jr+r9cYsbyGHk0KQ48mXhzYmto3xA43DpJcLcV9VWizw847DRUcgJgGbXUYh79xwADvw0GG1zKcEpdg9wtF4CDBeLQBidcynaZTNBxEmDfGbwpkcXCNabGMdKc1D02hJ9ZTug44cFLSA3XUIjBeug7j9fwNEXDq67V+EGSc9D1CfXpNYvnqeA/eKz6P+CzSsBXz3BafL+U5z3lOeYn4VVddVZx55pnlNHisR4hAcNlll5XvzWfBdsJ46XL2Cz2MTwiNdRzbUNvQE8vG+DEf/GUbjnlgm2Cc9LOMstB6J+AQINlGmW58H+iXbu+8l6FHk8LQo4lH7UYcXJpK4GBINzttDjgcENJfsRwE4ooVdtZR4xLjsKOnOw5ujMf4Cx08AqGK9+e1FP5vCgK8T8x7On8pwkZMi/mpT4fTFL/+679efOITn6j6zOJ1Mf+8nnXHeqiHq1gXMR/RjwNsfXljvca8pOErMLwpwMU89MNBk/dtwnuly5MGHsS20RbvQ/n6179enHHGGeV0qeW55JJLqjFm8bmwLcS2wrqKzy1dTuavaX2A8RnGa9L1wPs3BYWm/kyD92ZbZT5im41ppdOvl6bPoy6dfmzv9c+L6fRaxl7LIo2LoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPZIkKQuGHkmSlAVDT4Mbb7yxOPLII4uXvvSlFovFYrFkUZ71rGfNuaHpNDL0NLj11lvLu61ec801FovFYrFkUV74wheWd9+eZoaeBv/xH/9R3oZekqRccBd3HjcyzQw9DQw9kqTcGHoyZeiRJOXG0JMpQ48kKTeGnkwZeiRp8b773e9aOlwWYujJlKFHktr54Q9/WOzbt6+48847i2984xuWDpf//M//LB544IHqk5vP0JMpQ48ktfPII48Ud911V/Hoo48WP/jBD8oQZOleeeqpp8rPivDz+OOPV5/eXIaeTBl6JKmde+65p7j//vurLnXd3r17e35ehp5MGXokqR1qeQ4cOFB1qeseeuihMvg0MfRkytAjSe0YeiYLoYfauSaGnkwZeiSpnUkLPRz0//Vf/7Xqyo+hR/MMOvRceunMip5Z02vXVj0kaUpMWuj56Ec/WrzgBS+ouvJj6NE8gw49O3fOhp7Vq6sekjQlDD2TxdCjeQw9ktQO9+eZ9NBDw97NmzeXZSc77JpLL7304HCeQp4ub31Y1xl6NI+hR5LamfTQs2PHjuKoo44qNmzYUGzatKlYvnx5+X/g/5UrV5bDKGvXrj0YjFbP7NTrw7rO0KN5Bh16brttNvSsWFH1kKQp0XR666UvHW15zWuqN26hHnpOPPHEMrCE22Z22Mtmdtj8Bf831f6g37CuMvRonkGHHhB6KJI0TZpCz5FHHtrnjaIcbuhhvtOAE6i92bJlS/k/tTkrZn6xbt26tXG80047rXFYVxl6NI+hR5LaaQo9t9xSFDfdNLryjW9Ub9xCGnqiVqc+/wSdqP1hGAGIU1ec+iLkxPgxjPGZzpo1a+ZNq2sMPZrH0CNJ7Uz61VuEFdr1pAg39X5gOeunw0IMixqirjL0aJ5hhp4J2jdI0oImPfScddZZc2pv6Ca8RDdXZ8X/XOXFqS76oWlYU1jqEkOP5hlG6OHKLULPhLV5k6S+Ji30XHnllXNCD/POFVrU+FA4VZW2z4lTV1HSK7to0xP961d9dZWhR/MYeiSpnUkLPbkz9GgeQ48ktWPomSyGHs1j6JGkdgw9k8XQo3mGEXo41UvoaWj0L0kTy9AzWQw9E4DLA2ldHw3GFoMW9dxifDGvG0boIewYeiRNG0PPZCH0cFxsYujpCC4LpHD/g8WGHlreRwv7tgw9ktSOoWeyGHomCM84WUx4iTtlUlNk6JGkwTP0TBZDzwRZTOjhQ+UGU/ztQujhXlbMwgQ8hFeSWjP0TBZDzwRZTOihhiduB75Q6Ln++uuLZz/72QfLT/3UT5U3mhokrtpiFriKS5KmhaFnshh6Jkjb0BOntcJCoeeJJ54o7rzzzoPl2muvLcPPIBl6JE0jQ89crIvf/d3fLfbv31/16Y3xPvGJT1Rdo2HomSBtQw+Bh2eg8MRbCv/zOv5v8/j/YZzeMvRImkaGnrnuv//+8nizb9++qk9vr3zlK4v3ve99VddoEHq8ZH1CtA09XOlF7U4UQhCv4/9eCTc1jNDD2zLrM/lLkqaGoWcuQ0+3TUToIexs3ry5WL9+fbkx8T8l7Nq1q3j6059e3HvvvVWfuRY6vVU3jNADZmERsyFJnTeJoYdjCjX/HBc4E7B169ZqyOxT1utPSmf8tdVVKPxwXrduXflaCveQY3hYSuhJp818bdy4sRoyi2NZ3HeOv+kDTvsNSxl6JgA1N9TW1Ev493//9+J1r3td8fDDD1d95orXt2XokaR2GkPPrbcWxVe+MroyMw9t0cSBYMBxAQQWrvSNbsICgShF4IkQQTBJQxFtSLnwJdbB4YYeXk/QIXTxf8wXYQbMN+8TASvGQYwbZzLSYXWGHs1j6JGkdhpDz5FHHtrhjaK85jXVGy8sDTAhvfiF4EBoSQME3U3tQRmHMw2ElQhChxt6CF0ElxT9qLUBIYb3iflK8d4Mm/c5NDD0aJ5hhR6uguf72WK7lKSJ0Bh6XvSi0ZZFhB7u0E9AiAtdKJyiol9gnKhhIRDRHVhWXkOtS5x14P8Y/3BDD6+vn5Eg6DCtWL8ENrqZX5p4RH/+8lqGMW/psDpDj+YZVuhheyb09Kh1lKSJM2lteggHcQ+3XqhhiRBE4EnHp5Yo2veENCQNI/SkqOlh/hg3DWroNywYejSPoUeS2pm00ENooaakH5aHsEHY4W+6fASKCDggaDDOUkNPtDVKT18xzfoprxDvm44fmsJSMPRoHkOPJLXDDV0nKfQwr9TMxGkgClcG12t/aFBMcKjX6kQ7G17HVV9MK21wfLihB7xXnLriyi2mE22FmD7zybDLLrusvMorTrvFMOaHwrLFsDpDj+Yx9EhSO5MWegIhh1ofCv/Xa0yoeSFMNDVgJvgQihjO6+iOq6W4w/+5555bPP7442V3P9u3by9uvvnmqmsW02Ke6u/N+9TnOdZ7v2F1hh7NM6zQM7MNl6GHv5I0DSY19OTK0KN5DD2S1I6hp7cbbrih+PjHP95Y2tQEDYOhR/MYeiSpHUNPb29961t7lgcffLAaa7QMPZrH0CNJ7Rh6JouhR/MMK/TQCJ/QU7sYQJImlqFnshh6NM+wQg+N+wk9XMUlSdNgz549xbe//e2qS133rW99q+fDuQ09mTL0SFI7tE35r//6r+J///d/qz7qqqeeeqo8vu3fv7/qM5ehJ1OGHklqh/vSsM/8xje+UdYgWLpZOKXFZ8QdtH/4wx9Wn95chp5MDSv0cO8rQk+PR6JI0kT60Y9+VDz22GPFAw880Fi4S/GgStP0R1ma5mmUpWme2paFLpM39GRqWKEHhB6KJEldYujJlKFHkpQbQ0+mDD2SpNwYejI1zNCzfPls6PG2FpKkLjH0ZGqYoccnrUuSusjQkylDjyQpN4aeTBl6JEm5MfRkytAjScqNoSdTwww9PmldktRFhp5MGXokSbkx9GTK0CNJyo2hJ1OGHklSbgw9mRpm6NmxYzb0rF1b9ZAkqQMMPZkaZujhqi1CD1dxSZLUFYaeTBl6JEm5MfRkytAjScqNoSdTwww9e/fOhp4VK6oekiR1gKEnU8MMPSD0UCRJ6gpDT6YMPZKk3Bh6MmXokSTlxtCTqWGHnuXLZ0PPgQNVD0mSxszQ0xEHZtLBrl27is2bN5eljSuvvLI4++yzi9e97nXFhRdeWNx9993VkIUNO/T4pHVJUtcYejpi9UxKWL58ebFy5cqZsLDwLBOMTj755GL9+vXFxo0bi9NOO6183Y033liN0Z+hR5KUG0NPR1DTg50zKaFN6Ln22mur/2bdeuutxdOe9rTibW97W9WnP0OPJCk3hp6OaRt6mpx66qllzU8bhh5JUm4MPR1zuKHnuuuuK193ySWXVH3muu+++4pLL730YHnf+95XHHPMMdXQwfNJ65KkrjH0dMzhhp4Xv/jFfRtA7969u1i7du3B8pu/+ZvFT/7kT1ZDB8/QI0nqGkNPxxxO6DnllFNaX/EVhn16y9AjSeoaQ0/HLCb0PPjgg8WP//iPF6961auqPu0ZeiRJuTH0dEyv0EObnL/7u78rvvOd75TdXK11wgknFG95y1vK7sUadujZsWM29KxdW/WQJGnMDD0dsWnTpjLs1Eu44YYbyu577rmn7OZ0Vn1cypo1a8rhCxl26JnJbjPzM3sVlyRJXWDo6Yi9e/eWtTz1Eh577LHipptuKp588smyu2ncKG0YeiRJuTH0ZMrQI0nKjaEnU8MOPXv3zoaeFSuqHpIkjZmhJ1PDDj0g9FAkSeoCQ0+mDD2SpNwYejJl6JEk5cbQk6lRhJ7ly2dDT/UAeUmSxsrQk6lRhB6ftC5J6hJDT6YMPZKk3Bh6MmXokSTlxtCTKUOPJCk3hp5MjSL0+KR1SVKXGHoyZeiRJOXG0JMpQ48kKTeGnkwZeiRJuTH0ZGoUoWfHjtnQs3Zt1UOSpDEy9GRqFKGHq7YIPVzFJUnSuBl6MmXokSTlxtCTKUOPJCk3hp5MjSL07N07G3pWrKh6SJI0RoaeTI0i9IDQQ5EkadwMPZky9EiScmPoyZShR5KUG0NPpkYVepYvnw09Bw5UPSRJGhNDT6ZGFXp80rokqSsMPZky9EiScmPoyZShR5KUG0NPpgw9kqTcGHoyNarQ45PWJUldYejJlKFHkpQbQ0+mDD2SpNwYejJl6JEk5cbQk6lRhZ4dO2ZDz9q1VQ9JksbE0JOpUYUertoi9HAVlyRJ42ToyZShR5KUG0NPpgw9kqTcGHqmAAHm+9//ftXVzqhCz969s6FnxYqqhyRJY2Lo6YhLL720WLNmTXHUUUfNhIR2s3zHHXcUL3nJS8rxjzzyyGLz5s3VkIWNKvSAxWm5SJIkDY2hpyM2bdpUli1btrQOPS960YuKM844o9i3b19x8803l8Hnoosuqob2Z+iRJOXG0NMxO3fubBV6br/99nK82267repTFO985zuLVatWVV39GXokSbkx9HRM29Czffv24ogjjqi6ZsVrqflZyChDz/Lls6HnwIGqhyRJY2Do6Zi2oeeDH/xgceqpp1Zds+K1u3fvrvoc8oUvfKF4/vOff7CcdNJJZfuhUfBJ65KkLjD0dEzb0EPbn16h59/+7d+qPofs37+/+NKXvnSwbNu2rTj22GOrocNl6JEkdYGhp2Pahp4dO3b0PL317W9/u+rT2yhPbxl6JEldYOjpmLah57777ivH47L1wCXrr3jFK6qu/gw9kqTcGHo6Yu/evcWuXbuKrVu3lmGG/ymBU1LHH398GXbCS1/60vKSdV7L8Gc+85nFJZdcUg3tb5ShxyetS5K6wNDTEdyccPXq1fNK+OpXv1r82q/9WvHggw9WfWZvTviyl72sDDs0TO7qzQkNPZKkLjD0ZMrQI0nKjaFnSGibc6DDN6Yx9EiScmPoGQBOTXEJeVi5cmXZLof74BB+umiUoWfHjtnQs3Zt1UOSpDEw9AzA2pmjOZeQg7/Lly8va3kIQgzrolGGHnIfoSdpoiRJ0sgZegaABsdRo7Nhw4birLPOKv8n+Jx44onl/11j6JEk5cbQMwCEHJ6QDkIOp7vAw0ANPYYeSVI3GHoGgPvkcEqLdjy054kGzJ7emjWzesrQs2JF1UOSpDEw9AwIQad+xRY1PQSiLhpl6AGhhyJJ0rgYeoaA4MPdlLsaeGDokSTlxtAzAJzGisbLiEvWKXFVV9eMOvQsXz4beqrmTpIkjZyhZwC4estL1vtj9RB6jjqKmrCqpyRJI2ToGYB+l6wTgLpo1KEHXL1F8Fm3ruohSdIIGXoGIC5ZJ+SsWLHCS9Z7oIlTnObq6Fk/SdIUM/QMgJestzezSjzNJUkaC0PPANWfs+Ul683iNFfS9luSpKEz9AwYwYfL1aO2p6vGGXrS01y1nChJ0tAYegaENj08VT0uVaesX7++Gto94ww94KkdhB7u0uxpLknSKBh6BiAuU48GzKDGh/Y9XM3VReMOPZhZPWXw6egqkiRNGUPPAHD1Vhp4AsHntNNOq7q6pQuh57bbZkMPxdNckqRhM/QMAFdoNYUeGjJT29NFXQg9SE9zSZI0TIaeASDwcH8eQk7gqi1qeTy9tbA4zUXwscZHkjQshp4BIdxEA+Zo0Mydmrt6FVeXQg9ZkXs4Enwo3LG5o1f6S5ImmKFngKjdoVEzJa316aIuhZ7Aqa64lJ2bF27dWg2QJGkADD1DRENmanu6qIuhB9TwcBPrqPWhHbinvCRJg2DoGSJDz+Ej6KSnvDZurAZIknSYDD1DZOhZGppDxdVdlDVrvJGhJOnwGXqGyNAzGDSPirY+nO7qeHMpSVJHGXqWgFDDc7Z6la1bt+YTeqiC2bx59rzUEDD5uLSdRs5DehtJ0hQz9CxBXKLer2QTeuI8FOeghoTgw5PZeRtKw/0gJUnqydCTqaHU9ETL4yGnkbSdT4ef6SqpLfYfHIiGWFsswdCTqaG06SHskERG8EwJ3ira+QyxcknSsNA4j5tx8QWOXzFRuEMpQUgaMENPpobWkHlEtT1IGzhv2VL1lNRdfGmpnuWHURpyKDQF4LE96R1Kd+yoXigNhqEnU0MLPeykYoc1gl9q1ISP8O0kHS4CD1/UCDn8QKKRHvuM9MvLHUoJQDEeNUE+l0YDYujJ1NBCD2KHReObERjx20laLEJN1O7whW1z34n0HDZhyefSaAAMPZkaaugZcfVL+nb+IJQ6hn0AN9jiS8p9JxazT2Dc9Lk0BKfLLhvJfkXTydCTqaGGHoy4+iUuZeevpI5YSuBJ8csmfS4Nv3BoG+SVXlokQ0+H3H777cUll1xSXHTRRcX1119f9e3tO9/5zsx3fmdx3nnnFdu2bSvuuuuuasjChh56qL6OHdQIql94ixG+naQ2CCZ8KTlNNYjaGU55pe19KNT+cOrL2h+1YOjpiB07dpQhZP3MTuKcc86Z+S4vK7Zv314NbfaqV72qePWrX138+Z//eXHmmWeWryEEtTH00IMRV7/E23kJu7RE/HLYtavqOExp4GnThmcxmD+u9EprfyjW/GgBhp6OWDNzpN7Mjbkq/L9q1aqqa77HHnts5ju+rLjjjjuqPkXx2te+tpxOGyMJPSOufuGHXrR7dN8nHQa+pxFWKNSibNy4+NBCIOH1wwg8dVz9FbU/tP+R+jD0dMCTTz45831dVtb2hL0zOx/6XX311VWfuR599NFyeFobRM1Pp0IPYuc3ouqXETwNQ5o+/GJIww6lXotC2xxOI/X6AUO4oXaIceI1yT5tqJj/eM8R/MDS5DL0dMDdd989811dVuzZs6fqM4t+tO/pZcuWLcUb3/jG4mMf+1jx3ve+t3j5y19e3HDDDdXQuR566KHi05/+9MHy4Q9/uDjmmGOqoUM04uqX9O1Gtb+VJhZfGGqY0/vncJ44ggNBJr1hYBQCEL8smm4yGGUENyidI67y8k6l6sPQ0wG0wyHg1NEvPeVVd8UVVxTHH3/8zP7ntOLII48szj777DLcNLnpppvKUBSFU2dHsaMbhah+YUfJTnbI2NfyduyPJfXQL+w04VdENJxrKlydxWkmwsc4fnHwnsyHX3z1YejpgDiV1VTTc/nll1ddc9028wuM4buqxoYPP/xw8eY3v7l4wxveUHYvZGSnt5BWv7CTHUGNzwifhiFNnvhlQCGoLOY7yfeZ8SnDbq+zWPHFH8E+RpPJ0NMRBJhrrrmm6iqKffv2lf0+//nPV33mogao3n4naozaXME10tADfkGml5rSOHKItT7xo4+MNYLKJWmysO/gCzJtp4KiDeGIrhjV5DH0dMQZZ5wx51QW/3MaKjz44IPl6awnnnii7P7Upz41891eVuzevbvsRlzq/u1vf7vq09vIQ0+IU10UqqGH+IsszVjs41m9bOu2c1TW+M7xpaD2ddp+EfDlZtn8taMeDD0dcdVVVxXHHXdcGX7injvplVk0UKbfPffcU/Upil/5lV8pVq5cOfPjZkOxbt26mX3Y8uIDH/hANbS/sYUeUCXO+X92TpQk7A0Sb1O/ACUK+8SZVVbe0V7KSjT4ndaH1cWvHc9tq4Ghp0O+9rWvFRdccEFx/vnnF9ddd13Vd9YjjzxSBqPvfe97VZ9Zn/3sZ4tzzz23uPjii+fU+ixkrKEnpLU+NHIeUq1PNEHg7dgf1i9EocLJ8KMsRE0IZVprQqK9EvsUqcbQk6lOhB6QRtLqGM5DDfGUV2Dfz74xfWvDj6ZeXH01zW1eCHPxy8Zz2aox9GSqM6EH7KSohkmrYEYUfmD4URb4nsUl6tMeBiLc0bBZShh6MtWp0BM6GH54+yi0AaL5URS+N9N6hkBTKE4nc4532tGgj2Ul5EkJQ0+mOhl6QlP44fx8v1vgD1A9/CxUmDWuwF/q8xmloYpanhH9iBi7+BJ7a3YlDD2Z6nToCU3hJ1LGCAIQx4a0sO9kdqKkl8SnhVohZk/qjGjcy1WTueAeRCzzYh9Cyn6FXzAUDo5U6/qLZmoYejI1EaEnReLgPH09AMV152Nso0AgoulAehU+xR+Y6gzO1bJREn5ywY+m+DL22j/QnwetRi3YQoVfNIagiWboydTEhZ4UO+6410ha4jzTGDfoqJxidjjO2OZHY0cqZ4PkdE9uej2ElC8m4SX2HWlhPVGNS+H1Tb9ookQI4ofXUoMQ88Q0mBbTZF8WDQqjsI9L35txGL9rjwPpMENPpiY69AR2EhGA6jVAsVPgPNMYdgixj/TiEY0d3wM2xhyfPh7Po+EXSCBQpDU71CC3qSlmf8P0+oUgSlwBQSDhvfoVapni8xlEqb9vTL9eRtQ+sosMPZmaitBTxy9aqlmadkjs5NgB8KtoBNUvcfEIxR9hGpvYEPlRMILtvpOiQTMhIE7zUajJYZ9xuCIEsc/hh1e/INS2RO1SNBxk/tISOxPem+5478VceZGWhdpH8j7UPrHuaEqQBif2pxGuokSNV4e3NUNPpqYy9KT40lELxK+4ph1CesnVkH7x8IMw3ko6bEvZRr1fzaEvYhT2BwSGYeGzikCyUGEfxbiDCAlpEKpPPy292kdGDRDhhf/TgHi4hZ0f06qHI96H7TotIwpKhp5MTX3oqeMXEtX7/Cpq+nJS+HLGzXj4UiwxDPEdjryV45kFHQa2U34tE8jT9hvpNhq/sBcKQxzg4nVDCvYTgWVnHfBlJATokAhAsZ00FWqfCI71AEV3GrAo7F8Zv6m5wWIK23m6rfN+A2LoyVR2oaeOLxFfUr6g/aqGF6r+XUA0KeDsWs7HnYlGeh3GL1I2CHa+7NTZwde3vSicNlnKQYSDWu788vXHNk2IiVNr7LgGsc4iIKXBiEKIYt+bln7bOK8ZEENPprIPPU3SXy9NX8LDDEBRuUQlkjqMHX/a8DP97OuFqv+oFVwoCLG9ME4acHpdIk0AZ4OharD+65b3oB/D4qDR70BBWGIcD/iaJLGdp/vj+ndhCQw9mTL0tNTr/Hecq+7XmK8qD/x/u4rf/oldxeplu4ov/r+H+jcdjDw+jQmnlXq1YeCzj1+jlKZxKG3v9RIlphs79X7BSdJAGHoyZeg5DL0C0ADKY0evKP71mWuKncvWFFuWbSw2/uwni/e+ZW9ZOeCxcMj4XCOwUDsSvzL7ISTxKzRqXGqf55yShiYCzqBOHUhaNENPpgw9SxQHRk41cCBLC8EoDnJJ2f3M1cWuZauLPcevLh5ftbp48ukLh6cDy44qdixbV1z43M3F+1+9q/j0e28r9vxjVVMUp0uicLql6d4gac2TB9u5WD+xvvnclpIwTadS5xl6MmXoGT0qB5I8c7BQEfBnb9hbfPH/qYLUTHA6sHJ18f2nDb5G6WCJG6jF6bn6JaTTHo4IKCx7rA8vr5OyYOjJlKFnPOKWIQQdKhY409EX4WNmpPvesKG476TVxcPPOPFgjRFl87JNB8u6ZTuK9x69pfjkaZuKf1s3E5w2NNQ8He6puWjDVC9LqVEieKRBKy3slGptowaGeYvLwVkfC53KkjQ1DD2ZMvSMz6COsUwnbVbSlGeo0KFCg/ww5+wLB/6YAMGofglpv8v4u1Da3jitKayl7XeofpOUDUNPpgw904ljOGdquOq5HoI41h/2MZ4XEpLqJdox1WuU2oQmxkmDVlpYgHrbqKZpHG5h+rbBkbJj6MmUoScPEYIiMxB8qNyZeNRUtQktvcKapCwZejJl6MkPFSdR0TEVwUeSFsnQkylDT544CxXBh6vbJSknhp5MGXryRS1PBB8aOUtSLgw9mTL05I1mLdHQmQucbNMrKQeGnkwZekQb3wg+ca/CXiVuxbPQLXNoX8xw9inxminfv0iaIIaeTBl6BGp4uF1NnO5abIlQtNCzNhnOqTQDkKRxMvRkytCjQPCpX9FdL3ErnoVumUPNEcO5DU68ph6qIgDNu2GiJA2ZoSdThh4NQoSihZ46wXDuF1QPQJxWW/BRHJI0IIaeTBl6NC5NAYhTZAsFJ0laKkNPpgw96gIun08fl0HD516nvAhFPAyegBTjx6O1uOdQNJqmIbWnzSQ1MfRkytCjriCgpHeL5pQXp8wQQSceir7YEg2tCURMh0BkjZKUL0NPpgw96hqCTnrKq+lB6jSQpnYoanKiTVE0mmY4DanrD1utFx/DIeXJ0JMpQ4+6ivY+EVr4Sy3Q4TZ2JhDxWgJRPLQ9go+P4ZDyY+jJlKFHXUZNTpziGjQfwyHly9CTKUOPckbtT9QmGXykfBh6OuTd7353sWLFiuK4444rzjjjjKpvf+95z3uK5z3veTM772Vl2UyLzRYMPcpd+hgOnz8m5cHQ0xFvf/vbi1NOOaXYvXt3sWfPnmLNmjXFOeecUw1t9id/8ifF8ccfX1x55ZVl986dOw090iIYfKS8GHo64thjjy0+9KEPVV1FccUVV5Q1N4SgJvfee+/Mznp5cdVVV1V9FsfQI80i6MRVYwQfgpCk6WTo6YCHH364DDg333xz1WcW/bZt21Z1zfWJT3yiHH777bcXF1xwQVl6BaQmhh7pkDT48GwwH40hTSdDTwfceuutZYB59NFHqz6z6HfeeedVXXOdf/755XDa/6xfv75405veNLOzPqr4wAc+UI0x10033VS8/OUvP1hWrVpVji9pFsGH+/wQfChc0u7pLmm6GHo6oF/o2cJNSxrQn+EXXnhh1Yed9May3/79+6s+hzz00EPFpz/96YPlwx/+cHHMMcdUQyUFvnKEHoqnu6TpYujpgH6nt7Zv3151zUV/hu/bt6/qUxR33XVX2a/NaS5Pb0m9EXTS0108wkLS5DP0dASnqf7xH/+x6irKBsr9Agz9GZ42ZI7an//+7/+u+vRm6JH649RW+kywdes83SVNOkNPR3BqikvWaXtzxx13FC972cvmXLL+la98pTj11FOL+++/v+pTlMO5tJ1L1a+99tri5JNPLk4//fRqaH+GHqmd9EaG1PoQfuLhpW0QlBjX02TS+Bl6OoQbDZ5wwgllGKnfnPCWW24pXvCCF8w5nQXGY3xe1/YePTD0SO3xZPb0Yahpiae4E2wIQ/xPMKJ/r/FpJL2Y4CRpMAw9mTL0SItH+OHZXZz26hWC6oVaIh50euKJzcMp3hhRGg1DT6YMPdLSEVQ4/bVhw2ywIQzxRHf69XpgKv25QqwenKgdkjRchp5MGXqkbqD2KNoMeVNEabgMPZky9EjdEfcGWrHC01zSMBl6MmXokbqF02MEH06VSRoOQ0+mDD1St3BJe7Tv8fJ2aTgMPZky9EjdQyNoQg9Xc0kaPENPpgw9UvfQnicubScASRosQ0+mDD1SN3FJO6GHuz9zZZekwTH0ZMrQI3VXPPOLuzdLGhxDT6YMPVJ3cZor7t3DHaDrGM4jLHjkRVouu2y2fxQbREtzGXoyZeiRuo2wE6e5CDOEGmp+6KZ/2+IND6VDDD2ZMvRI3Rf37qmXeJ4X9/ShwXMUTovRnxKPuKBfzqgV42o4AiOP+khrxLwRZH4MPZky9EjdR0NmAszatbOhhkbObRs3x31/uMtzzqJheL9CGDIA5cHQkylDjzT94vL3nNv2EBZZB9R4ccqQbkJkvRaN/pp+hp5MGXqk6RdXgfFsr1wRcFgHvdo2pTVB3iJg+hl6MmXokaYfB3oO5tRq5IrTewsFmgiHnObSdDP0ZMrQI00/2qlELUaObVYIOiw7Db/7Yby4RQA1P5pehp5MGXqkPETblRwvXV9MTVe0/fGGkNPN0JMpQ4+Uh7Qhb25i2ds0UqYmLBp+N90QUtPB0JMpQ4+Uh5wvXV9sLVfcEJJ15SXs08nQkylDj5SPaK+S29VJcffqxSx3BKVBXMLO+3ITRI6x8agQGktzCo1gxd8o69cfGoeSU9sigjnraevW2WVnXcQ6GtRnEQw9mTL0SPnI8dL1to2Y6+IS9jZPuY/Hg2zceCi8xIF6EIXpDhs1WiwHd62O9407WLNsg7x7NeEmAiDhL33PfsXQszgzq0x1hh4pH3HahnvW5GIpl+tHSOzVDoog0Cbc0EaI96dw4KYwXwQrAhV/o8SNEyk8XiSmQfgYxqk2jvsEj3R+FyosM8u+GPVA1VR4ZArriPXN8rMuYh0NmqEnU4YeKR8cNOMAkwsOniwvfxeLg23TJez1sEOoYfrUoEV4GdTdr5lWzAPvudTpxikkao/qD60lDKeNtxmXcMayNd29eqHww/bGqap6MCTcMD2my/QHta4Ww9CTKUOPlJd4AGnbRr2TLg7Uh7u8EZriNE8aFAg7aUgYFsJDfG6Uhd6T8eP0UbSL6fVUfqZLWFtMbQrvH1e4Uerhh/cf17pqy9CTKUOPlJc4ZcLfHMSB93BPkXAATw/wFIJUWvMzKnG6jUKYCSwboYN+bU8hEeaWWsPSFH7qNUjjWlcLMfRkytAj5YUDEAcjDo7TjjDAsi62EXMdtURdOYATNNLTXfVTR1Ei2MQpt8MNfW3Uw0+8fxfDTjD0ZMrQI+UnDprDPBB2QRpWlooan66ghiYNGXyetJGJgDMuhB9qo7ocdoKhJ1OGHik/HCA5WHapjcUwRHsc/k4bQhif3zgaAU8DQ0+mDD1SfjhYEgYIP9OMGh6WM5dG22rP0JMpQ4+Un2jrQoPTabbURsyaXoaeTBl6pDxFm5BJaH9xOAbViFnTydCTKUOPlKe4dH0a27tgkI2YNX0MPZky9Eh5ilAwrZeuT3MjZi2doSdThh4pX4QCSpcuxx4UGzGrH0NPh9x+++3FJZdcUlx00UXF9ddfX/VtZ+fOncUuHqzSkqFHytc0X7puI2b1Y+jpiB0zP0sIIevXry/OOeecmS/tsmL79u3V0P54LeNT2jL0SPniZnbsLnjK9jTV9tiIWQsx9HTEmjVris08qa3C/6tWraq6ejsws8c6auanzdqZn26GHkltEA7SRxpMy43ubMSshRh6OuDJJ58sAws1NmHvzF6JfldffXXVp9lZZ51VbNq0qSyGHkltEXTSJ3gnv7kmlo2YtRBDTwfcfffdZWDZs2dP1WcW/Wjf0wvteFay15qxUOjZv39/8aUvfelg2bZtW3HsscdWQyXlKi5hp3BF11LbwhCmaF7YqwyzVslGzFqIoacDCC9NgYV+6SmvFKe1VqxYUb4WC4WeL3zhC8Xzn//8g+Wkk04qT4tJEruRuGkhu4WtW6sBLRBiGJ/2QdGIeKEyrFBiI2YtxNDTAf1qej7ykY9UXXNtmPl5Rgme3pK0FDRo5knZ7EYo1PqsWTMbZvjtVS+9Qg7hiRqXphKn02hHNGg2YlYbhp4OiDY911xzTdWnKPbt21f2u/baa6s+c62e2YMwvKlE7U8/hh5JTaiFiUbObQohh7DE5e9taliiRmnQ7W5sxKw2DD0dcfrpp8/8epr5+VSpX7316KOPFp/5zGeK733ve1WfuazpkTQo1Prw24lCmCCg1EvbkFPHNNlVUUs0yMvlmadhhClNF0NPR1x11VXFcccdV5xxxhnFmWeeWQaY9D49N9xwQ9nvnnvuqfrMZeiRNCni5ojUEA0KNTxM00bM6sfQ0yFf+9rXigsuuKA4//zzi+uuu67qO+uBBx4oLrvssuKJJ56o+szFJe5tTmsFQ4+kcYn2N5RBXc1lI2a1YejJlKFH0jjF6SgaSy9VhCgbMWshhp5MGXokjRPteaLB9FKfAWYjZrVl6MmUoUfSuBF2CCtcwr7YRs3c6LB+6byNmLUQQ0+mDD2SuiAaIC8UWGiyuHHj7Okwxq8XLoUf5t2eNR0MPZky9EjqgvQS9nojZLqpzaEmqB5yuNEh92fl1JaNl9WWoSdThh5JXRF3guZUFTgm8X8acqjJIeQQkgZ5fx/lxdCTKUOPpK6gpiYaNddrdQhE3ntHg2LoyZShR1KXxCXsFGp1tmyxRkeDZ+jJlKFHUtcQdBZxj1Vp0Qw9mTL0SJJyY+jJlKFHkpQbQ0+mDD2SpNwYejJl6JEk5cbQkylDjyQpN4aeTBl6JEm5MfRkytAjScqNoSdThh5JUm4MPZky9EiScmPoyZShR5KUG0NPpgw9kqTcGHoy9bnPfa74sR/7seLnfu7nWpcTTjih+Omf/unGYZZDhXXkelq4uJ7aFdbRz/zMzzQOs8yWn/3Zn3VbalH8zv1cccQRRxTvfe97qyPhdDL0NNi/f3/x0Y9+tLjllltal3/4h38oTj755MZhlkPlT//0T4t169Y1DrMcKr/xG79R/NVf/VXjMMuh8uxnP7vYtm1b4zDLbLn66quLpz/96Y3DLIfKBz7wgWLVqlWNw3Ip//Iv/1Ls3bu3OhJOJ0PPgOzatas49dRTqy71wo7lj//4j6su9fKGN7yhuPjii6su9UIN69e+9rWqS02+9a1vFc94xjOqLvVy5ZVXlj82NN0MPQNi6GnH0NOOoacdQ8/CDD3tGHryYOgZEENPO4aedgw97Rh6FmboacfQkwdDz4AQev75n/+56lIvrCPX08JcT+2wjgw9/RF63JYWRuhxPU0/Q48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9AzA9u3bi3POOac4/fTTi7/8y7+s+k4PblbFrck3b95cliYHDhwo/uIv/qJYs2ZNsWHDhuKOO+6ohhzyT//0T8X69euLN77xjY3T4TW8tt80eB2v7zWNcfriF79YvOtd7yrn/7zzzivuuuuuasghX/nKV4qzzz67eM1rXlO8+93vrvrOxXL99m//dnHmmWeW21bdl7/85fImj/22N6bBTSDZLpumMS4PPPBA8f73v79cR/3WU3ynWIZenzPrj/XI+mS91jGNM844o3j961/fOI3/+Z//Ofh5ve1tb2vc3rrgtttuK+d/586dVZ9D6N/v+5B+p/h+8j2ti2nw3eQ7Wsc0WD9Mg/XFeusK5r2p1PE94fvS6/sQ21u/71TbafTbZjV+hp4luuaaa4ply5aVG/lVV11VfinY0U6T1atXFyeeeGKxcuXKclnrHn300eK0004rl5sdM+vilFNOKfbt21eNMbtz4rUXXXRRccUVVxTHHXfcnB0D0+A1S5nGODHPK1asKD74wQ+WV/IxX0cffXRx4403VmMUxWc+85niyCOPLIexrbCs9W2Fgw/LvWPHjuL8888vl5dxA9vbM5/5zIPTaNrefv/3f7/vNMaJ9cQybtmypZy/d7zjHfO2KeaVfsw747AsLFMq1h3jsi5Yr6zfENOIbYVppNsKB3/uvss00u3t/vvvr8boDr53y5cvLzZt2lT1mcU88x1g+VhOljddRr47sdwsI8vK95TvWjicabDemsLTODBfhDH+piXFPPM9iW2FZeR7FJq2N16TOpxp1LdZdYOhZ4n+6I/+qPiDP/iDqqsobr755nLj50swbdjpsWx17DCPOeaYqmtW7CgD3dyjJ7BzYGcboaZpGjzWoz4NXhfq0xgn1k3dq171qvIXcvizP/uz4rWvfW3VNYtljm2FWhDWbxqUWGfsbAPbWzoNajjS7Y3Lk+vTYPtMp9ElDz/8cHHssceWB9zAvFLrEFgWlollA8ta31ZYJ6zfwDTS7Y1psK1EqOHA1DSN+gFz3AiHa9euLX94pKGHbZ7lSb8PLC/fkcCy8B1Kscx81xDTuOCCC8puNE2jaZtl/XUB80fo6SXCCPvlwPeB71FgW2nah6fbW30arJN+06hvs+oOQ88S/dIv/dK8A15UA0+bXqEnThGk2Bm96EUvKv9/7LHHytfV1xP9uDcG2k6jLp1G13B6Kj14s62wTCmWOQ7Wn/rUp+YtY6zz+HXeNI1XvOIVB7e3ftN45JFHqj7dwrxdfvnl5f/MI93pr2jQj2UD66tpW2HdgHXF+E3bWzxButf29uIXv7jqGj9OK1PDSq1KPfSwzbM8qfic+a6A707T9hbbCtPgmVwEz1CfBuuj3zTGjXljfpjvPXv2VH0PafqcGTe2ldje6tsKr4ntrde2stA06BfTUHfMP4poUfiV2vSF4Y6604bl5ItcR9Vv+qsHl156afGc5zyn/J82Abyu/quHfh/60IfK/5umwY6lPo26dBpdwrriicVpDQbbSvzKDixzVIPzi7v+yzxqbqK9SdM03vKWtxzc3pqmEZ9bl27ix2f7zne+s3yy81lnnXXwIYfMI/Naf+ghyxQ1Eqyv+rbCOmHdoNf2xjQuvPDC8v+m7Y1pPPe5z626xo+gw/co/k9DD9s8y5iqbyt8d+qBhWWOUzdM4+d//ufL/0N9GqyPpm02pjFuLB+nldnnMt+cerv22murobN3Nm8KPbGtxPbWtA+P7a1pGun21msa6Tar7jD0LNHTnva0xi/Mq1/96qpresTBs47TODSSTNG+gl+RuOmmm8rXfe973yu7A/1o1IqmabBDq0+jLp1GV/DLnLYTNGpMsa2kO2SwzLGtsBz1mgbWGctI42U0TeM973lP32nE5xbT6AI+WxrYvvCFLyz+8A//8GDIYR6Z1+9+97tld2CZ4nNmWevbCuuEdYOYRn17S6fRtL0xjdjexo3TWgSdUA89LAfLmIpthe8KWJZ66GGZWXYwjZe85CXl/6FpGk3bbExj3KgRfOKJJ8r/d+/eXe57CT6BbaUp9NS3laZ9eLq91afRtL3Vp5Fub+oOQ88SnXTSSY1fmLe+9a1V1/SIg2cdV89whUyKX5G/8Au/UP5/3333la+7/fbby+5Av/gV2TQNdtj1adSl0+gKfgU3HRTYVtI2GGCZY1v56Ec/WjZ+TrHOWMZ777237G6aBjUf/aYRn1tMo2t+8Rd/8eDBgXlkXr/61a+W3YFlYtnAsta3FdYJ6wYxjfr2xjT6bW9MI7a3cSIAHnXUUcVll11WNoqnEKI5Vcr/YDlYxlRsK3xXwLLUQw/LzLKDadRrtpqm0bTNxjS6Jrb1O++8s+xmW2kKPfVtpWkfnm5v9Wk0bW/1aaTbrLrD0LNEfBnqVZhUe9Z3NtMgdih1LGva+BEc+NPGs7wuvcyTX2X0+/znP192t50Grwv1aXQB89zrs2db4VRU6nnPe97B8WP9pqd2OL1BY9Onnnqq7GYab3/728v/wa9cqvfr04iDFtg+02l0DQdzLvMF88i8nnvuuWU3IvDGQYVlZb2lWK9xYIppxKkhsE7TbaXXNNLtbVy4RJ2anbRw9Rbte/gfLAfLk34f+H7RL7As9dNQfMdiW4lppAfrpmn022a7JvYJcXqO+YzTUIHvQ31badqHp9tbfRp8B/tNo77NqjsMPUtEGwF2JPEl+5u/+ZvyC9CFK4oGLQ6odeyk6R+hJsaLK4rAgS3dAdPINz3ADGIa48a89WvLVd9WWNb6tkIbi/SAUg9Rbba3+jR4+n/aPU7MR8w7+Nzr88v/zHOgO217whVYxx9//MFthemxTqK9DnjN7/3e71Vds939trevf/3r5TTS7a1LCDv1S9ZZHr4DgW0lbTgfVx3FgZdlpZtlD22mwSlI1g+apjFO8fmBBuxvetObyh8BcWozrlDje4Je20q/7W0Q01B3GHoGgB3Fs571rHKjT3ek04KdLctVL6kPf/jDZT9+/cR9ZFJUAf/O7/xOccIJJ5TVwumBO/CaftOInU2/aYxLhLR6iV+DgStBOHVBuwOG17eV66+/vtxZcjqDHS3bVh3T6Le9xTRYPzRmbZrGuMRnzLwxj/zfNH/0Y95jPJYpFQdf1iPrs+lqIqbBATDWRX1bSbfZuH9SVzWFnvg+8F3gO9G0jCwT3yWWkWVlmVP1afAd5buaYhqsn17TGCfmh1qYmDe2h/oVU7Gt8H3he7PQ9sa49e/UYqfRtM2qGww9A8LlktzHIb3x17Tg1AAH9Xqp49JX+tevmkmxk6XdQL2RaeC1/abB63h9v2mMQ7pe6qWO9dlvW+GUFfffaboENwxiGuPCPPVbP4HxWIZoqFrHsrMO0tOBdQttb2222S6gZqVpOdt8H+I7lV6anopp1ANTaqFpjEu6LfXbDmJb6fd9YFi/71TbafTbZjV+hh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPdKE4r4kPIupfn8SHngaz2gaFu6LUr/x4jjxLKQ3v/nN5Twxb5LUxNAjTai4U3Y9fHDQp/8wjeI92vr+979fzst5551XztdiQ0/T3Y4lTSdDjzShOFCvXLmyfBhleqDPLfQwLzyKgIdALjbwwNAj5cPQI00oDtRxwOZZXaEeSJoO6mm/GP/KK688+EywX/3VXy2H/fVf/3X5HCGeA/b+97+/7Id4Dc8kInjx/+tf//p5j3PgPXg2FsP5u2PHjmrIofnfsGFDOZz/m3C6jodgMg6Fmq144GU8yysd1oQnrsd8UHgmF5iH9PWUwGsYj368duvWrdWQQ8vPQyh5bhXPY6ov/5YtW+a851lnnVUNkTQuhh5pQkVoIBRQ28NBGocbeggQ/M8zmF75yleW3Zwyos3Q5z73ueKII44odu/ePec1v/zLv1w+WJFuAsdv/dZvlcPB9AlE0eYoXhPdETjq81ZHWGA6LGd0R2hBvHc/vE8auNKnhDetH8ZlncZ4/KU7phHLwvvyf8xDPIiS8Rmevk/6v6TxMPRIE4oDddSO8H8EgTggh7ahh4cphne+853F0UcfPechljxh+qKLLir/j9d8/OMfL7tx7bXXlv2uuuqqspv/06CB9H35e+KJJ5b/91OfDuGHfswDInD00zQvodf6oaYmxThr164t/4/l/9jHPlZ2Y9u2bWW//fv3Hww9MY+SusHQI00oDsIcnEEQIEBQ2xMH5NDroB796uODWp56kKCb/ojXcIBPnXzyyQdrhxjeVOJ90/nvpWneQM1POv8LhR7WCzU1nG5at25d+ZrQtH4Ytz7flJjfpuXnKdz0i9qwOG1HGOX0nDU90vgZeqQJVQ8N0QaFGg0OtoHaifpBfVCh59Zbby27EQf9yy+/vOzm/34H+vr8N4nwFKfEQv1U00KhJzBuhJGYt6bQQ6iK04VNmpafmjL6ffOb36z6zGJcPgMCV305JI2WoUeaUE2hgdoeAgAH38B4aSiIIDGI0PO3f/u3ZTe4V84znvGM4qtf/WrZTXAgYPTSJvSAZUpDSbx3BAi624aeELVioI1QfT7p12+aTctPDdcpp5xSdc3H+LxO0vgYeqQJ1RQaOJBzcKWECDmcYiG0cMBPg0QcwFNtQw8Nd+lPoTuGg5oUamQ4nUR/CleZxYG/beiJmquNGzeW06DGpB6CFgoo1IDFPDA/zFeEJtZZnPZiODhdSGhjftPX1dcZ89+0/AxPX8twpidpvAw90oTiwNp0CoYDcxoKQACJ/hzseR2vB3/jYB3iYJ1K+6WvueKKK8r/achcR3jgveK908bEvea/CfMc04j5Dk3zX8f7xutpoBxXggXWT4yTSl/H//E63pOQc9dddxUXX3xxce655xaf/exny2FgvPS1LGf9PSWNnqFHkhYpQo+kyeK3VpIWydAjTSa/tZK0SIQeiqTJYuiRJElZMPRIkqQsGHokSVIWDD2SJCkLhh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPZIkKQuGHkmSlAVDjyRJyoKhR5IkZcHQI0mSsmDokSRJWTD0SJKkLBh6JElSFgw9kiQpC4YeSZKUBUOPJEnKgqFHkiRlwdAjSZKyYOiRJElZMPRIkqQsGHokSVIWDD2SJCkDRfH/A6MbJSagde/sAAAAAElFTkSuQmCC">
La loss represente des "sauts" à cause de la reprise de l'entrainement à deux reprises. Cela induit une modification du learning rate et explique la forme de la courbe.
## Résultats
Les questions générées sont évaluées sur les métrique BLEU et ROUGE. Ce sont des métriques approximative pour la génération de texte.
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAApcAAAGFCAYAAAComticAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAF4vSURBVHhe7d0JmBTVvffxvO/z3PfebGq84d7EJSEBlwQl4hIWNe67EEVJVNAQNSioUaLBNQIuERQxgqgoiiBCQAERQRYVFFllEUQEhn3f91XU/9u/U1VQ01M93TA91U3398NzHqZOVVdXdfdM/fqcqlPfMQAAACBLCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXKKgjRo1yvr37+9PZUebNm1cyWf9+vU7KLazIqZPn25dunSp9P3M9euoz7CeX/8f7AppXwCkRrhEbL7zne+kLJV18P7Tn/5ktWrV8qf2T6pQcfbZZ7uSr5o1a2ZVqlRJu52aF34Pvve977nX6kD3OXl94RJep6ZTrUvLaX46kydPdsvVqFHDrStqm7Mlefsri54j6nkUxLQNhRIuC2VfAKRGuERsdFAJgkBUqQwVCZepQkVlbm9FrV271m33M88849ekpvci/H7cd9991qRJE/vhD39od911l7+UJ1i2PMnrSy6B4HMQRctpfjqtWrWyqlWr+lOVK9XnINtS7XuhhUvtJ+ESKGyES8SmvFBxoDZt2mQzZsxwRT8nq4xwmcqOHTts5syZKbcl2bRp02zhwoX+VGbSPcf+BJEgDCZTXXJwS7VsWCbLSHmfg0zDZXnPle4zkUzvQ3nCn4ONGzfahAkTbP369W46it7T8ePH28qVK/2a1Hbt2rX3vco0XC5btsydEqDPQiqrVq2ySZMm2fz58/2a7ArWP2fOHNu5c6dfm1omn0cAhYNwidiUFyoCderUsYsuusif2keBSo9/8803/RpzrXOqC5fkFrvkcBkcqJMlH9jD6wxKcICMCjavvPKKHX300aWWT94WPYce17t3b6tevfre5e644w5/ifLp+ct7Dq07PE8lCEVRovZDWrRo4bqbw1ItG5bJMqLtSrVcqoAVCN6/5BLI5DMRbGf4fSjvdQrmN2rUqNR6dV5rmF43nY4QXqZp06ZlvkCoPljff/3Xf+2dDj8uKBLss/4Pb4O2fejQoW6ZgELfNddcs3cZlQsvvLDUNmjfw/PDJRO33nprqcdUq1bN3nvvPX+uR/XaJ5XDDz/cTWv7w/sCoHBl9tcEyAIdVNKFjyeeeMItt3nzZr/Gc9NNN9nPf/5zf8ps8ODBbjkdbEtKSlwJDryaFzjQcBksp/rgoBgcEINwEojalptvvrnMtmhdepwO/joY64Cv8KHlkoNKMoWGdM8RbKPqkrc5SvJ+qDWuV69e7vEtW7b0az3Jy0bJZBnR+lMtl/w+JAv2KXiu8D5m+pkIHlu7dm3r27evrVmzptzXKVjnX/7yl1LrPeKII2z58uX+UuZOKRg4cKB7X8Ov5d///nd/CU+wvhtvvNE+/fTTvfsQ7HswHWyT/lf9sccea+3atXMtot27d3ety8mvY/369V2dLnTSfulzVbduXbvgggv8Jfa9huFyxRVXuMemE2yj/tc+qhVXX0SSXwsto/VdffXVNm7cOPflMHiuYB8BFC7CJWKjg0qqEhxsli5d6qZ18Ar7j//4D7v33nv9Ke8gquVmzZrl15j7WXXhg+SBhkuJ2g4JwkkgaltEB//wtgTP8dZbb/k15gKA6u68806/JtpVV12V0XPsz8Fb+6Blw0V1UY9N3ucoUesLSnidmk61rqj3IUrU9mT6mQi2M9NRBLRs8mkC6kpXfdTnI0zzTzvtNH/Ko8fVrFnTn9on1b4H7+kll1zi13j0BUD1QagbOXKkm37sscfcdGDQoEGuXv9H0fm1J598ctpufHXhK0Qmb/urr77q1h9+LTSt31l9vsP25/MJ4OBFuERsdFDRgV0HlqgSUDfeMccc40+ZdevWzT02fG7cL3/5SzvllFP8qX1Up3mBOMJlqm259NJLS22L1hWeDiSvL0qmzxHsX/j1TCV43uD1f+edd6x169Z2zjnn2O9+9zvbsmWLv2Rm25i8vuQS0PalWleqgJUsantSvUbJnwk9TlfGZ0rbo9c52WGHHeZaj8PUUnj77bfbxRdfvHcbv//97/tzPVrf9ddf70/tk2rf9dqpXq2VYUF98Nrq90QXY2k9yeV//ud/ypweIKr7wQ9+YFOnTvVrvJCaXGTevHnu+W677TY3HQi+IN1www1+jbeP5557rj+1T/I2AyhMhEvERgeV5EAQpUePHm7ZESNGuGl16SWHBs3XQTOZ6jQvEEe4zHRbNB21/8nri5Lpc+zPwTvV8959991l1pHJNmayjGjdqZbTVeDh/Ukl6rn0uExeo0y3M5BqvcnrUTf3z372M/f6KQjq9QtOewjLdDsDqd7T5Ho9XudhBtuVXJKfs0+fPu7xyedtdujQwdUH5cwzz3T1qbZDgucIaLmofSxvHQAKR/q/4kCW6KASPgCVR11qOsdt7ty57nEvvfSSP8ej87xStVKFL0ZJFS6Tu+uCc/PCNB11gEw+kKbaFnXFhrdF64ra/+T1Rcn0Ofbn4J3qeXWhS/K+Z7KNmSwjQQCKcuWVV7r56UQ9V6rXKPkzkel2BvRaaLuSqeWyefPm7uclS5a45Tp37uymA7pYS/Vhya9tQHXJy0qq9zS5/o033nDTyWExis751bIvvPCCX7PPnj17yhRZvHixe4zCc5iuXld98FqIpqP2MdW+ACgshEvERgeVTA/q6mLTwfvxxx93jwt30YoGClf9xIkT/RpzP6tO8wKpwqW6EMMUPlQfpmldGZssOZxEbYvo/LTwtlQkXGb6HPtz8E71vLp4SusIX2SUyTZmsow0aNDArT98AYjoQhh1IUd1QSeLeq6o1yjqM5Hpdgb0+F/84hf+lCc457Jr165uWkP+aFpjhYb99re/dfVhmo4KXkGLvYb3CUv1nibXBwPLpxp9IFivhknS71byWKaZUFDXhUVhzz33nHve4LUQTRMugeJFuERsdFDRQV0HnagSNmXKFLf8SSed5M7BTBa+Q4taX1SCgKh5geRwKQpkWk7PqQOi1q+fVRcWXI2r8+o0PzggJoeTqG05/fTTy2yL1hEVapLXl0omz7E/B+/gebVdKmqR0rTO29Nrtn37dn/Jsssml0yXCWgbdWFI+/bt3bbqf02rXlcWpxM8V1imn4mox5ZHj9f2a4gsXQGuUq9evTKtpLroSut9+eWX3edKF+Bce+217vFhwfqS6cprzdOV2+HXLNV7GlX/1FNPuToFTHXN66IlvbbarmC5448/3m1b8Bzhko72S+tXS7++fATPl/xaqC5qfan2BUBhIVwiNsFBPapEHYhUf9ZZZ9nbb7/t15Q2ZswYF7B+/OMfu6KfVRemcKmLU5Lp4HjIIYe459BBUs+vn8PUvah6dT1rXnBA1M/Jy3755Zeu6/R///d/U25L1HNI1PpS0XaX9xzaRq0rk4N38LzhovMGg2FswqKWDUrw3kXNC5dk2hddaKOwof8VzjTkUiZSrTOTz0Sqx6aiZbWPKhq+SBfAKFwmh2B9joLTK4LHBO9HWDAvigKhxsvUMsHjUr2nqeq1HQp7P/rRj/beBlQtqsFywbqjSiY0pqu+fOiiKI27GvW+aV1R+5hqmwEUFsIlAAAAsoZwCQAAgKwhXAIAACBrch4uNa6dTmDX1Ys6VykTOtfpjDPOcBce6MrFVOcvAQAAIF45D5fBid86wTvTcKnbqenEeQ1loissFTLDw2AAAAAgN/KmWzzTcDl9+nS3XPhWgBp2I3koDAAAAMTvoAuXGmbju9/9rj/lCR6bPCgzAAAA4nXQhcuOHTu6wZbDgseGB0oOjB071urWrbu3nHDCCS6chusoFAqFQinkonF9dTclIA4FFS6nTp3q1+yzevVqGzJkyN7y7LPPul+ycB2FQqFQKIVc1LDy2muv+UdGoHIddOGyvG7xlStX+jWp6U4qusMJAADFQncqS3W3MyDbDrpwqXCo5cK3XtPV5ple0EO4BAAUG8Il4pTzcKlQGRSFxuDnwMSJE61GjRqlLtapU6eOG4po4cKF7pxK3es306GICJcAgGJDuEScch4u1eqosS6TS2DSpEl24okn2ooVK/wabxD1evXquVBZrVq1/RpEnXAJACg2hEvEKW+6xeNCuAQAFJtch8vt27dTKqns3r3bf5XzB+ESAIACl4tw+c0337hT2ubMmWOzZs2iVGLRaYLr16/3X/ncI1wCAFDgchEuN2zY4I65Cj1fffWV7dmzh1IJZdu2be7UwZKSEv+Vzz3CJQAABS4X4XLx4sXcOS8mu3btci2Y6ibPB4RLAAAKXC7C5dy5c/Oqq7bQ6fQDtRbnA8IlAAAFjnBZ+AiXOUS4BAAUG8JlesnjbEfR8Ihr1qzxp/IL4TKHCJcAgGJDuExPwTI8znaUY4891vr06eNPVa5BgwbZAw88YJdddlna7RLCZQ4RLgEAxYZwmV6+hUvdIKZVq1ZuuzK5PbbC5caNG/2p3CJcAgBQ4AiX6QXhUncBfPnll+3JJ5+0YcOG+XM9UeFy+PDh1qlTJ+vSpYtNnjzZr/UoIIa72vXz/txVUAiXBwHCJQCg2BAu0wvCZY0aNdz/Z511lgt16p4OJIfLxo0bW9WqVa1p06bWqFEjt7yCZkDTyeEyk6AYRrg8CBAuAQDFJl/C5ZVdxsZarn5hnP/M6SnEKViGw2GHDh3s9NNP96dKh0u1VCaHPrVKHnHEEf4U4bJoEC4BAMUmX8LlqY+NtJ/f+25sZX/DpUJceJsnTJjg6ubPn++mw+FSLZWtW7feWxQsVcJBUD8TLosA4RIAUGzyJVzOWLbJpizeEFuZs2qL/8zpKcRVr17dn/Jo+xXsgnMpw+Gyfv361rZt270lCJcqAcJlkSBcAgCKDedcpheEuKFDh/o1Zr1793Z1wRA/4XB577332kknneR+TkXnY2odgSeeeIJwWYgIlwCAYkO4TE8hThfynHnmmfbhhx/undZwQIFwuFy6dKkLfc8884xNmzbN1UnXrl39n8yuv/5615Kpe6xrfZdffnnG4VLLB0WPCU9HIVzmEOESAFBsCJfpKbQpTCoMVqtWzQ455BB3XmWYwmX46vFly5ZZw4YNXa5QAFRp1qyZP9dcd7qmVV+3bt29QTET2o5gneFCuMxDhEsAQLEhXBY+wmUOES4BAMWGcFn4CJc5RLgEABQbwmX+qVmzpv3yl7+MLNu2bfOXyhzhMocIlwCAYkO4zD+7du2ynTt3RpYDQbjMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTLwke4zCHCJQCg2BAuCx/hMocIlwCAYkO4TK+8WysGdFtI3ZUnHxEuc4hwCQAoNoTL9ILbP5YnfG/xyqZbTwa3fNT9zjt27OjPiUa4zCHCJQCg2BAu08u3cHnGGWfYwIEDbfPmzTZ69Gj7/e9/X+Ze52GEyxwiXAIAig3hMr0gXC5evNhatWrlgtzgwYP9uZ6ocDlkyBC7+eab3fK9evXyaz1t2rQp1dWun1V3ILp06WKHHXaYP1UW4TKHCJcAgGJDuEwvCJePPfaYC2kPP/yw65J+5513/CXKhsvGjRtb1apVrWnTpnu7sTt37uzPTYSsxHRyuFTdgXjqqaesXr16/lRZhMscIlwCAIpNLsJlSUlJ2XD53G/jLa9c5D9xegp+NWrUKNX62LVrVzv99NP9qdLhUi2JyUFRrZJHH320P5W9cLl27Vr3uPK65AmXOUS4BAAUm7wJl08dY9b6kPjKfobL//f//p+753fg888/d6Fu/vz5bjocLtVS2bp1671FwVIlHB6zFS71mPLOtxTCZQ4RLgEAxSZvwuW6ErM1s+MrGxf7T5yegl/16tX9KY+2X8Fu8uTJbjocLuvXr29t27bdW4JwqRLIRrhUa2q6YCkKlxs2bPCncotwCQBAgcubcJnHguA3dOhQv8asd+/eri4IbeFwee+997rpFStWuOkoOh9T6wg88sgj+xUua9eunVGwFMJlDhEuAQDFhnCZnsKlLujRmJIaLD2Y1pXjgXC4XLp0qQuKLVq0cMtqyCBdXd6kSRM3X66//nrXkrl8+XLr3r27HXPMMRmHywYNGrhgqXWHSyqEyxwiXAIAig3hMj0FN4VJhcFq1arZIYccUqbVUF3U4ZZN3a2nYcOGLlcoNKo0a9bMn2uuO13Tqq9bt+7e58iElosqqRAuc4hwCQAoNrkIlwfbUEQHO8JlDhEuAQDFhnBZ+AiXOUS4BAAUG8Jl/qlZs6b9+Mc/jizbtm3zl8oc4TKHCJcAgGJDuCx8hMscIlwCAIoN4bLwES5ziHAJACg2hMvCR7jMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTL9NLdXlEGDRpkCxcu9KfyC+EyhwiXAIBiQ7hML5NbM4bvLV7ZtC3BLSX1vNdee62NGzfOn1uWwuXGjRv9qdzKi3B5zz33WNWqVe2II44ocx/PKLrv50UXXeTu+6n7fN50003+nPQIlwCAYkO4TC/fwmXXrl39n/ZtW3kZiXAZ0qJFCxcQdXN33eReL174pu9RqlSp4gLm8uXLbfDgwS7VazoThEsAQLEhXKYXBLh+/frZ9ddfb5deemmZbBEVLrVMw4YNXfDTY8M0L9zVrp8zzSvJ9FjlnVQIlyEKip07d/anzHr16uVePIXNKFEvrt6o8l7wMMIlAKDYEC7TC8Jl0GDVunVr93M4oySHS80/6aST7JlnnrFWrVrtfWxA08nhUnX7a968eXbNNdfY6aef7teURbj0rVmzxr3IEyZM8Gs8quvdu7c/VVbt2rWtY8eO7me9mGr5vOOOO9x0OoRLAECxyZdweW6/c2MtN7x3g//M6QXBr0ePHn6N2QsvvGBHH3207dy5002Hw2VUw9YDDzxQqk4/VyRcBs+hUwe7d+/u10YjXPqmTJniXrTkD5/qOnTo4E+VtWDBAmvcuLGdcMIJ9v3vf9/at2/vzynrk08+sdNOO21vOfHEE+2www7z5wIAUPjyJVye3fdsO+G1E2Ir+xsu/+M//mNvkJRp06a5TKLgJuFwqQtsFP6Sy3/+53+6+aLHViRcBtQIp3XXrVvXrymLcOkL3rSocNmpUyd/qjS9cGeccYY7V3PgwIH29NNPu2ZsvehR1q5da8OHD99bunXr5rriAQAoFvkSLtfuWGurt6+OrWzYmfnQPAp+1atX96c82n5lkuBUvXC4rF+/vrVt23ZvUTd6UALZCpcSPHbp0qV+TWmES59eBL1QUd3iAwYM8KdKU6BUOFSXeuC+++7L+M2iWxwAUGw45zK9ILwNHTrUrzF3ip7qgvEjw+Hy3nvvddMrVqxw01GSu7Pvv//+jPNKMl0spMfOnj3brymNcBmi8yXVmhjQ1d+HHnqoLV682K8p7ZVXXnFDFoWp1bK8FzyMcAkAKDaEy/QULtUTeuaZZ9qHH364d1oX6gTC4VItiMoe6knVsps3b3YZpkmTJm6+6KpzZRSNbqOsU6tWrYzDpcKr1quidfzyl79025MK4TKkZcuWLmBqYNCZM2davXr1rHnz5v5c7zyD4447zr0xwbTemOeff961Xo4ePdouv/zycl/wMMIlAKDYEC7TC8Kkgly1atXcWNrJ40oqo4wcOdKfMlu2bJkbhki54n/+53/snHPOsb/+9a/+XHPd6RpeUblF50sGz5GJW265xU499VT3WOUgrae8RjTCZRJ1ax911FHuzUl+Iz/99FOX9MPNzq+99ppddtll7o3XByDdCx5GuAQAFBvCZeEjXOYQ4RIAUGwIl/ln27Zt5Zb9RbjMIcIlAKDYEC7zT82aNd0YmlGFcHmQIVwCAIoN4bLwES5ziHAJACg2hMvCR7jMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTLwke4zCHCJQCg2BAu0wtus1ieXr16uf3KR4TLHCJcAgCKDeEyvUxuzRi+t3icdOtI3QZS25gK4TKHCJcAgGJDuEwvX8Pl008/bWeeeSbhMp8RLgEAxYZwmV4QLvv162fXX3+9XXrppWW6yaPCpZZp2LChNWrUyD02TPPCgVA/p+t6D5s5c6YLlR9++CHhMp8RLgEAxYZwmV4QLhXiFABbt27tfu7cubO/RNlwqfmNGze27t2720svvWTnnHNOqfCYHAj1s+oy1aBBA3vggQfcz8nrSka4zCHCJQCg2ORLuJx7+hk2u9bJsZVF1zX2nzm9IPjpop1A165d3b2+d+zY4abD4VIh8kc/+pH7OfDYY4+VCo8VCZddunSxU089de9zEy7zGOESAFBschEuFXaiwuWs446PrexvuDz88MP9KU9JSYkLdeqelnC4vO666+zee+915f7777eHHnrIHn74YTvssMPcfDnQcDlv3jz7yU9+YoMGDfJrCJd5jXAJACg2+RIuv9mxw77Zvj228u3Onf4zp6fgdsQRR/hTnoULF7pQN2PGDDcdDpf169e3Zs2a2ejRo0uVcAA80HCpVlF1saubPih6XPBzFMJlDhEuAQDFJl/CZT4Lgt/w4cP9GrOePXu6uk2bNrnpcLhUi2XVqlVt0aJFbjrKSSed5LrWAy1btsw4XCYXPS74OQrhMocIlwCAYkO4TE/hUq2CZ511lvs5mO7QoYO/ROlwuXTpUhf4WrRo4ZZVAFU3dpMmTdx8ufXWW6127druPM727du7rvRMwmUUPU7PkwrhMocIlwCAYkO4TC8IkyNHjnRXaR9yyCEuDIZdcsklrus7sGzZMjcMkXKFzpFUV/ngwYP9uZ6g1VFd6MFzHAg9jnCZpwiXAIBiQ7gsfITLHCJcAgCKTS7C5cE2zmXcFAT1+kSVb7/91l8qc4TLHCJcAgCKDeEy/9SsWdNq1KgRWbZv3+4vlTnCZQ4RLgEAxYZwWfgIlzlEuAQAFBvCZeEjXOYQ4RIAUGwIl4VP4XLDhg3+VG4RLgEAKHC5CJcrVqxwd7hB5duyZYvNmjXLdu/e7dfkFuESAIACl4twuXXrVhd4FixYYGvXrrV169ZRKqEsX77cZs+eXe6dguJGuAQAoMDlIlzKtm3bbOXKlTZv3rzIMn/+/FhK1HNno0Q9VyYlal0HWhYvXuxOP/j666/9Vz33CJcAABS4XIVLFCfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOhEsAAAoc4RJxIlwCAFDgCJeIE+ESAIACR7hEnAiXAAAUOMIl4kS4BACgwBEuESfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOhEsAAAoc4RJxIlwCAFDgCJeIE+ESAIACR7hEnAiXAAAUOMIl4kS4BACgwBEuESfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOeREu77nnHqtataodccQR1qhRI7+2fPfdd58dd9xx9p3vfMeVNm3a+HPKR7gEABQbwiXilPNw2aJFC6tRo4ZNnjzZSkpK7Oyzz7ZmzZr5c6Np/pFHHmn9+/d306NGjSJcAgCQAuESccp5uKxSpYp17tzZnzLr1auXa4lU2Iyies0fPHiwX7N/CJcAgGJDuEScchou16xZ44LihAkT/BqP6nr37u1PldavXz83f/r06a7VU93oqssU4RIAUGwIl4hTTsPllClTXFBcv369X+NRXYcOHfyp0p555hk3X+dnNm3a1Bo3bmyHH354ym7xMWPGWK1atfaWX/3qV3bYYYf5cwEAKHyES8Qpp+Fy2rRpKcNlp06d/KnSVK/5L7zwgl9jdtddd7m6DRs2+DX7rFu3zj744IO9pXv37q4rHgCAYkG4RJxyGi43btzoQmFUt/iAAQP8qdJUr/nLly/3a8zmzJnj6lKdpxlGtzgAoNgQLhGnnF/QoyvFu3Xr5k+Zu1Dn0EMPtcWLF/s1pale88MX9ARd5UuWLPFrUiNcAgCKDeESccp5uGzZsqULmOPGjbOZM2davXr1rHnz5v5cs/Hjx1u1atVs2bJlfo25+RqySEMQDR061KpXr27169f355aPcAkAKDaES8Qp5+FSNCD6UUcd5UJf8iDq6uo+7bTTbOXKlX6NR8tpeT0u0zEuhXAJACg2hEvEKS/CZZwIlwCAYkO4RJwIlwAAFDjCJeJEuAQAoMARLhEnwiUAAAWOcIk4ES4BAChwhEvEKSvhUnfG0VXdGhoo3xEuAQDFhnCJOFU4XDZr1swNYK7AFoRLjV3ZtWtX93O+IVwCAIoN4RJxqlC4fPnll+3aa6+1vn372sMPP2yjR4929WPHjnWDoecjwiUAoNgQLhGnCoVLDWTevXt39/ODDz64N1xu3brVfvCDH7if8w3hEgBQbAiXiFOFwmXDhg1d66XoLjtBuHzvvffcLRvzEeESAFBsCJeIU4XCpW67eOmll9qUKVOsVatWLlwOHDjQrrnmGjedjwiXAIBiQ7hEnCp8Qc+NN97oLug577zz7KSTTnI/V61a1Z+bfwiXAIBiQ7hEnCocLmXEiBHu6vAOHTrY4MGD/dr8RLgEABQbwiXiVKFwefbZZ7uu8YMJ4RIAUGwIl4hThcJlixYtCJcAAOQ5wiXiVKFw+cUXX1iNGjWsW7dutmDBAr82vxEuAQDFhnCJOFUoXKrVUhfwRBV1mecjwiUAoNgQLhGnCoVL3e6xvJKPCJcAgGJDuEScKhQuD0aESwBAsSFcIk4VDpczZ8503eO6W49uB6k79eRrq6UQLgEAxYZwiThVKFx+8skn7vzKY4891ho3bmzNmjWzU045xdUFt4XMN4RLAECxIVwiThUKl3feeWfkhTtqyaxZs6Y/lV8IlwCAYkO4RJwqFC4VLFN1gav1Mh8RLgEAxYZwiTjRcgkAQIEjXCJOnHMJAECBI1wiThXuu1bAbNCggWvBDEqfPn38ufmHcAkAKDaES8QpP0+MrESESwBAsSFcIk4VCpcDBw5051cmU11UfT4gXAIAig3hEnGqULi8++677amnnvKn9unXr5879zIfES4BAMWGcIk4VShcMhQRAAD5j3CJOFUoAd56663WokULf2qfjh07MhQRAAB5gnCJOFUoXE6YMMGqVKlit99+u2vBnDZtmj3wwAN21FFHMRQRAAB5gnCJOFW471qtlAqY6gYPSqNGjfy5+YdwCQAoNoRLxCkrJ0bu2LHDZs6caTNmzLBNmzb5tfmJcAkAKDaES8Qpq1fdLF261GbPnu1P5SfCJQCg2BAuEacDCpe6SrxLly7+lKdu3bp7u8WrV69uQ4YM8efkF8IlAKDYEC4RpwMKlz/84Q9tzZo1/pTZ4MGD7Re/+IV169bNdY+ff/751rRpU39ufiFcAgCKDeEScdrvcDlnzhzXOhlWv359u/nmm/0ps1deecWOO+44fyq/EC4BAMWGcIk47Xe4nD59uguXq1atctMlJSVuunfv3m5aNCxRcgDNF4RLAECxIVwiTvudAHVl+BFHHGE9evRw0506dbJatWq5nwMKl9z+EQCA/EC4RJwOqHmxTZs2rmVSF/bo/z59+vhzPJofdeeefEC4BAAUG8Il4nTAfdfDhw93IXLZsmV+zT6q79evnz+VXwiXAIBiQ7hEnPLzxMhKRLgEABQbwiXiRLgEAKDAES4RJ8IlAAAFjnCJOBEuAQAocIRLxIlwCQBAgSNcIk6ESwAAChzhEnHKi3B5zz33WNWqVd3g7I0aNfJr01u4cKEbZ3N/7gZEuAQAFBvCJeKU83CpwdZr1KhhkydPdreS1MDszZo18+eWT0E0GNA9U4RLAECxIVwiTjkPl1WqVLHOnTv7U2a9evVyYVFhszzdu3e3c845h3AJAEAahEvEKafhcs2aNS4YTpgwwa/xqK53797+VFkrV660n/zkJzZlyhTCJQAAaRAuEaechkuFQwXD9evX+zUe1XXo0MGfKuvPf/6z3Xbbbe7ndOHy448/thNPPHFvOfbYY+2www7z5wIAUPgIl4hTTsPltGnTUobLTp06+VOl6Z7lRx55pG3fvt1NpwuXWvdHH320t/Ts2dN1xQMAUCwIl4hTTsPlxo0bXTCM6hYfMGCAP1WaLv5RoAwXLa//R40a5S+VGt3iAIBiQ7hEnHIaLkVhsVu3bv6U2eDBg+3QQw+1xYsX+zWlJQdLFcIlgGQ7vvra5q/ZZnNXb/VrgOzatOMrW75xh81ZtcWmLt5on5SstaGfr7Q3Jy+17mMXWucPSuxf78+15z4ssRdGz7OXP55vr36y0HqOW2RvTFxsfT9dYv2nLLNBny23d2essGEzV9rIWats1Ow19vHctbZ0ww7/mSqOcIk45TxctmzZ0gXMcePG2cyZM61evXrWvHlzf665+p///Oe2bNkyv6a0IFxminAJHPx0QNfBXAdyHcTbD5ttLftOs+tenmDndhhtv354mP383nf3loue+cgeHjTTHcDXbd3trwWBXXu+sfXbdtvi9dtt1orNNmnhevtw9mobPH259Zm0xIUihaTH3p1l9/efYXf0nmp/7j7JrnlpvPv/9sT0fYn6RxPztZyW750IT+8kQpPWM3HBevti+WZbtG67ex49Xy6E91Pbk24/tV/avz90HW+XPPux/e7JD+3kR0fYcQ+9V+rzVVlFoTRbCJeIU87Dpdx333121FFHudCXPIi6LvpR4NQV4lEULjU2ZqYIl8DBYduuPTZ23jrrkjjA/qXnZLs0cXDXgT3qILy/5eynRtnf+n3mWo/mrSmcls0N23fbwnXb7LMlG+3juWtci1iPcYus0wcl9sjgL9w+3/Tap9boxXF2/tOj7dTHRka+PnGUq54f60JbXEX7HLUdFSn6ElP78ffdF5oGnT+JfN6KFIXzbCFcIk55ES7jRLhEPti++2tbs2WXLVi7zWYs22QT5q9zXWHvfb7SBk5dZv+etMReG7vQXvzIa0lp996X1vadL+z+ATPsb30/sxa9ptiNr01yLXVRB6VMyh8T5bY3prpWP3XTjZ6zxmav2mI7v/ra38r4qEXp00UbrNuYBfbXPlPtvETwiTqYB+WUR0da/c5j3Gug1+TZxGuk1+yjxD58uWKza50KUxe5wtZTw2dHhoxaj4yw5onXVK2geny+0f6oRVH7pxY27a9a1rT/eh30eiTv0/6Wmm2GW712H9iFHT+yq18YZ026TXSvyT1vTrc278y0p0fMsWdGpi8dE8tpeT1Oj7/+lYlufVqv1v+btsMjnz+uoucP76e2L7yf2n51Yb8+fpENSPwujvhilY1P/H5OX7rJnWaxOvF7u233Hv+dOXgQLhEnwiXylro+1W2lP+wHS3lm5Fz7x9ufuwDYrOdka5wIfw2e+8SFpTr/fN9OaF26uzZfy4mJoKHQcnOPT631oJn20sfzbciMFfbZ0o1Z6VbWgVqBttVb012LZNQ2qFz0r4/dQV+hT93gKzZl7xw0ddWqRa9Jtwllujm1/2rh0zlwUe9zZRWds6cvEfryoJa90xMhKLxd6Yo+X3rMZZ3GuM+e1qPw/WTiC4S+qCiAa5/0ZUYhWq+nvujkkoLa5h1fuQCt4KZtUre1vnjpfFmFan0B0/sf/D3Q/5pWvebrnEctr8fp8VqP1qf1HoxBsDIQLhEnwiVySgcEtZgpPKjbrumrk1wXU9SBs5DKsQ8Oda1lZ7b/0C5OBCi1pqmlSGFO53kpUD048HN3DluH4bPdhQE6H0ytKf0+XeK6y9Siota45ICSaVGQ0brUMnp3v89cK6jOKYva3uRyeSK8RLWGpisK2lHrU9FzKwx1TYQgBb+4W1DVlawAptbAGknnbOa6/Pofw+ycDqPs2pfG213/nuZasnVhiAL/lMUbbFkWL/xAYSJcIk6ES8RCF17owK2uPIUYdUtFHUTDRd1XCiNRIaWyiw7i6i7Tyfw630/dZgp9umhEwU/78dDbn7tuNJ38327oly4EqiXslU8WuNCmA78C3ORFG1x3swLAxu1f+a9Iflu1eadrGVKIVRehgq6C/wUdP3JBJ+r92p/y28ffdy2DCs0fz11rW3bmX+vS58s2uW5eBbnHh8xyLYpqldZ7//fEZ0CfBV3YokCsz4hCqT4zwakK+l/Tqg8+Q1pej9Pjg8+Q1us+Q4nn0Wfo+cTrrSuIx81b584HzXXLIgoD4RJxIlwiLZ0Lp/OQdLD8fSLsqatSF0Som/ektiPs+ApcOamT4XUgVmDThRu6alNdr1t30ZV1MFDwCbo0dQ6puiSXRHRpTlvidWmqO5artYH4ES4RJ8IlytDYar0mLLJbXp+8361UunpS3b11n/jAdeNp+I4ru4y1G16Z6FppdMHG+7NWu3OkAADxIFwiToRLuCtpNXCvunijrtLVBQIaw05dpDq/S+PD6apJdfOqFUpDxgAA8hfhEnEiXBahb7/1zidTN7SGo6n2wJBSYfK0x0a64WB09whdsQ0AOLgRLhEnwmUR0LAcGkPx+VHz3Dh/yePMafgSXXCgq0/prgaAwkO4RJwIlwVELZK6kGLw9BXuylOd5xg1uLLOo/zTqxPdkC9qwfxGDwQAFCzCJeJEuDxIffX1N+4qXA2KrOFRGj4/NvLim+oPDHWDYWsoGQ2PoyFxCJMAUFwIl4gT4fIgo2FfdLcUjROYHCRVdL9g3T9YXdwaFBoAAMIl4kS4PEjojioagPmYB4fuDZK6u4vuDa2wqTua6KpvAACSES4RJ8JlHtPA1LqrjQYsDwKlur41LJAGGgcAIBOES8SJcJlndDrkmJK17pZy4SGCdN5k74mLbdtuxpQEAOwfwiXiRLjMExqMXONOqqs7CJQ1Hh7mLsT5ciXDAwEADhzhEnEiXOaQrtrW+JPNek62avfva6XU7RJ1ZfdOzqEEAGQB4RJxIlzm0K2vTy7TSjl39VZ/LgAA2UG4RJwIlzl03EPv2a8TobL/lGV+DQAA2Ue4RJwIlzmiMSjVYnl5pzF+DQAAlYNwiTgRLnPklU8WuHD58KCZfg0AAJWDcIk4ES5zRIOfK1y+PW25XwMAQOUgXCJOhMscqfPPD1y4XLJ+u18DADjobV9vtnGx2aqZZovHmy36xGzJBLNlk81WfObVr/nSbF2J2YaFZpuWmW1ZabZtrdmOjWa7tpp9tcNfWfYQLhEnwmUOrNy80wXLWo+M8GsAHLCdm8zWzvUO5F8MMpvUzWx0O7N3/2bW93qz1y4zG/p3b54O4Dhwu7d5QUjBaPlUs4VjzJZOSvxR+9yr27TUe421XL7atcVs62ovAK6ZnQh8073wt+Ajsznvmc0caPZZb7PJ3c3GP282pqPZh4+bDbvfbNDtZv3+ZNbrKrNXLzF7vq7Zv040a/8Ls9aHZL989JS/0RVHuEScCJc5MGTGChcuNb4lgHJsWGQ27wOzsZ3Mhj9oNqCZ2etXmr14htnTx0cfkNOV504ze+cOs2m9vEBULHZu9gKVWs/mj/ZC1KeveuFp5MNeGNfr2+c6sx71zV4626zzqd7r/M8jo1/LTIoe+2Q1s2dqeOvTe9ftAi/09/y9F9R6X2P27yZmbzY163+z2cBbE0Eu8R4Nvsv7YqBgNyKxje+39YLeiH949VpGy+uxWo/W+fJ5Zi/UM+t0slnHX3nB7/GfRG9bvpcxz/hvXsURLhEnwmUOPDL4Cxcudd9wFCB1a21bkziQLzFbO8ds5QyzJRPNFnxsNmeY2ReJP/Cf9fFaRia84B1ARj3hHTz3HjD/YtZXB8yrQwfM0xMH51MSB8xfJw7Wv0wcMH8afUDan/JU9cSB/nyzfjd4AWNiV7O5I7yuu90xnbKhLsBlU8xm9Eu8Dv/0tuXFM6O3N6ooOKj1SIGl9x+91qUPHvHWFRSFEe1n1OP1Gug5J77k7ffBQi2Iy6d5n6lPX/H2c8g9Zm/d5AUt7W+nWpXXqnawFn1e9JooeCqAKojq90u/Z3rdFFQVWPV7qN9HfXb02o591vudnfGm18Kp7m61eq5P/B1XS2gldGVnE+EScSJc5sAVXT5x4XLSwvV+DcrQH2q1Ki0a64WdWe944WNKDy8AffIvs9Htzd5vY/befWaD7zQbcIsXEhQwejQw635pqGUkUaeDhrq09raMJEKIWkZ0QHYtI4mDiFpGXBhR0GvltXAlBz0dtCsj6OVjUauTuv7e+EPidbrbe91nDvC6Qw+kzPvQe//0nuk9Stf6qOd/5SKzt2/zWtj0/s8e6p2/pla4/bVnp9f9qdYvvZdRz9nuZ17r3bjO3jlwcVM3v87Jmz/KbNobZh8/7b322iaFoANtsX0s8XdPj9X7qd+Nfzf2XtfhD3ndr/qiM/V17/QBtWzqC9HqWV5Xd0VeB+2PgrDOL9T6FIgXj/Oeo+R9L6h9Odj70qXgpi9eU3t6QU6BX13TarnW+//Rk97vp35WvZZRF7ZaYbUevbfq4lboU5e3PiMKfuoKL3KES8SJcBmzPd986271WO2BIbZrzzd+bZHRCexLP/UCo4KGAqKCocKGuiwr0gWXD8W1jFT1DuRqOdLB/OVzvQP66w29g7palxRuFWx1cFfYUYgY38XrqnQHzESIU5DSQVjnE6o7UwdMdRVvXeV1c1aU1qUWGD2fDtzaJr0P2u6ofaus8uxJ3pcAhfopr3n7G9f5kQojeu31/AfTlwSF4C61E1+grkh8WWpu9sGjiUD8nBdI9bnRa6iQqs8Kih7hEnEiXMZs8qINrtWyQefEAf1gopZEtSKWjEzdiqgWhVRF4Urdj1EHyVRFrYJqJdTBs8+1Zm/+2eztFmbvtjQb9oB3MI16rmyUXAS9fKR9Uyuh9l/vs1rQ1IqpoHygRef1qaVs9hCvJSvfqGVNrZZqKdT2qrWw61le96nCnLpS/1XT+3x2ONZruX7i6OwF00f/x1u/LhjROYj68qGWu8/f8n4H1y/wNxTIHOEScSJcxuylj+e7cNnmnTw/t0thSWFKIe6lc6IPggdadDBWt7K6mdXtrDCn7i11f6s7S+crAgCyhnCJOBEuY3br65NduHznszwbPF2BTuctqZtW3bhRobBLndKtiOqKC7ciqjVKLSyTXvbO3VJLy5fvelf76vwtXeACAIgd4RJxIlzG7KS2I1y4XLYhx1cW6uR6DcWibuZnf1M2SLY5zBsy5L17vW7wHRv8BwIADjaES8SJcBkjBUoFy9gHT1erpM6V1JA3amFMde7jKxd6F9fMHe4NpwMAKAiES8SJcBkjdYUrXN7yeiUOnq7he3TxhUKirn5NFSQ1LMlrl5sbX1HjLwIAChbhEnEiXMao9aCZLlx2zdbg6bqqVVdsa6BftTqmuguFrj7V3Tbe+at3PqRu1wYAKBqES8SJcBmj+p3HuHCp4YgOyDd7Ejsw2LtVWtvDo4OkxlfUrds0fImGC9I4d98W6XiaAACHcIk4ES5jogHTf3GfN3j6nq+/9WszpICoO8hoCJ9wkHzmBG9A7tHtvGGDNDg5AABJCJeIE+EyJhMXrHetlr9/LsPB03XLNN36rOvvSgfKDsd5rZKrv/AXBACgfIRLxIlwGZPnR89z4bLt4HJCobqvda9d3f/60Sr7AqXu/KHbI+pew3RxAwD2E+EScSJcxuTmHp+6cDl4+gq/JmT9fLP323r3og4CZdsfefeh1nmTuvUigHKt27HOFmxaYNNWT7PRS0fbO/PesddnvW4vfPaCjVw00jbu2ugviWK3fc9227J7i23YucHW7FhjK7ettKVbltqizYusZGOJzV4/275Y94VNXzPdpqyeYhNXTrSPl35sIxaNsHfnv2tvzX3L3pj1hr3y+Svu89VxckdrN7GdtRnXxu4fc7/9bdTfrMX7Leym4TfZjcNutL+M+IvdOvJWu/2D2+3OD++0v43+m7X6qJU9MOYB+8fYf7jHPTb+MXti4hP25KQn3fqenfKsTV6VvZFFCJeIE+EyJsHg6Ss2hYKi7lOtq7yDQKmibnDd13j7On8hoLht2LXBPl/7uT09+WlrPba13TXqLnfAvuqdq+z8N8+33/b6rZ3w2gkZld+//Xt7dPyjNnTBUFu9fbX/DBAFLQUsBSsFKoWp9xa+54JUjy96uBD11KdPuSCkYKTw9Odhf3YB6pYRt9ht799mf/3wry5Yab5CVhCcHp/wuAtfevwzU56xzlM7u/W9NOMlF9C0foW1vrP7uud7u+RtF+L0/Ppi8OGSD23MsjE2bvk4F/Q0PXj+YPv37H+7xz837TkXzB765CH3/NqeG4beYFe8fYVd+NaFdnqf0yM/D/leus3o5r87FUe4RJwIlzFYtG67C5a1H3/fr/E9U8MLlB1/5Y1LuXaOPwP5buvurbZ2x1pbtnWZzds4z7VyTF091SasmGCjloyyYQuH2dvz3nYHy55f9LSXZ7zsDoAdPu3gDrQ66Lb6uJVrxVCLhg7STYY2sWvfvdYaDW5kVw660hoMbGCX9r/UHRzP63eendX3LDujzxlW5406dmqvUyMPRvtbzuxzpjUc1NBtw8NjH3bb2G9OPxcsZq2b5VoDK5tajfTaKUzoYKowou3R/u/PfipA6PXSa9j0vaalyjXvXhP5mAvevMAFoT6z+7gWq4OdWmfV+jZj7Qwbu3ysDZk/xHp/2dtenP6itZ/U3h4c86BrPVPwUtA+u+/Zka9LoZdTXj/F/R7pM6Pfq3P7net+z/T5qT+wvvv9a/ROI/dZ0u+lPkMKrArP+rzo91ctjQrLnad1ti7TulRKUcjPFsIl4kS4jMGAqctcuGzeK/SHYsMiL1iqKxxl6CA5f9N8m7JqimuxeH/x++5A2X9uf3ewfPXzV+3Fz160f035lztoth3X1nUxqbtJLSg3D7+5TMDYn6KDikLXZQMuc8FOoS5bge5gLGohbDyksWs1VDiOOhBmUtRipfdJLY86mEc9V7hoGbWMKXCqlUthXa1Z41eMd4FeLW3q3szErq93uccpQCvMRz2fwoYCv7rTFa5zTV9iFBYVvvU7oOCvoKj3QJ91vY4K4fp8Ru3P/pR6veu511utfQqfzUY0c62Aag1Uq2BlhCi1YiqgqVVT+6T3WcFNrZ4KcdpHBTr9Tivc6bOg906tpn//6O9ueT1eraBq/VSrpz4f+nKkYKZWWH1G1Cqr97+YES4RJ8JlDB4c+LkLly9/HBo8XYOfK1z2beJXFD618k1fO9217CkkqjVPwVAteDonSWHuYGlJqf1Gbfvdv3/nWr4uH3i5Xf3O1S586eCnA1/LUS3tvo/vcwc/HZh1DtXznz3vuvB6zeplb85503Xr6Ryuj5Z+5Lr6FKR1jtfMdTPdQVEtaQoWOjiu2LbCnRumg6TClM4Zy8bBctX2Va6VS8FFob3T1E6udUvvh1q26vSuE7n/2SxqKWr+fnN3rpm6RvVlQvsdBwUQhVYFqdN6nVZm29T1HvXlozKLWq71hSZ5WzIpao1TQNQ69AVL4UxfvHT+nr6Q6fdO3cyfrvrU5myY41qNd+zhnO5iQLhEnAiXMbjk2Y9duJyyODR4+ls3eeFSd8zJc+t3rncXSKilUCFJYUmhSeFJIUrBQKEq6kAZlKgDYXlFB0l1UQUtKHd8cIdrqXjwkwfdOXMKImoF6zq9q2uxULfmwJKB7lw6nY+lLsFJKycdcPlszWf25fovbeGmhS7Y6TXY9tU2/xUpLgqxCnsKJGo97j6ze2QrVCZFAVutS3qNFWzyjc7tfG3ma661TC15UZ/NOIu6bxUW9cVFX1rUiqigqM+8vpyoJVafU84fRTqES8SJcFnJUg6eHtzzO8/Os1Q3nIKZWnN0bpbORYo66B1I0bp0HpO6t9Ttpe4steTp5H09p7o58zFwAGrdc1cX79p3dbFa4hW6dc5t8tXFyV9W9qdoHWqtVus0kC2ES8SJcFnJxs5b51otr+wy1q9JUKBUsFTAzCG1SGmoC7WCqPvs4v4XR4ZClevevc61WKorWxeo6Nw3XbCiC1fUza0LWXRemA6wOtjqwKsLXhRWAQC5RbhEnPImXK5fv95Wrsy81WrhwoU2ffp0fypzcYfLzh+UuHD56LuhiwPUFa5w2f9mv6LyqWtXQVIhUF3LOmk/KkSq6HwtXTmsiwfy4aIGAEDFEC4Rp7wIl/fcc4995zvfcaVRo0Z+bbQ2bdpY7dq19y5fvXp1u+OOO/y56cUdLv/cfZILl0NmhAZP10U8CpdTe/oV2aPuOV0tqasndbWluqHLu8pZV0Pf+/G97upYDT5d7FdUAkAhIlwiTjkPly1atLAaNWrY5MmTraSkxM4++2xr1qyZP7cshctOnTrZzJkzbc2aNdalSxcXMlWfibjD5a8fHubC5crNO/2ahHY/88LlxiV+xf7T1cUaQFjnLeocxnTDumigaV0UoCuBdZ6jWjCL9QIVACg2hEvEKefhskqVKta5c2d/yqxXr14uLCpsZqpatWp2xRVX+FPlizNczl+zzQXLOv/8wK9JWPm5Fyz/daJfkRm1KOoqaLVGRg2ZEpSL3rrIXb2tIX7Ura2u8DgGwgYA5C/CJeKU03CplkcFyQkTJvg1HtX17t3bn0pPYVFd65mIM1y+OXmpC5e3vREaPH3cc164HHS7X5Hazj07bfjC4W4gYw1JEg6RuvJa40NqeCAtM2s950YC8s327bYn8bdl98KFtnPmTNs+6VP7ev16fy6K3Tdbt9qedevsq2XLbNe8ebbzi1m2Y+pU2zZ+vG0dNco2v/eebRr4tm3s29fW9+hh67q+ZGs6d7Y1HZ+x1U89ZaueeMJWPfqYrWzdxlY89JAtv/c+W37P321Zy5a29PY7bGnzFrbkL81s8Z9vtEU3/MkV/ay6Jbc2Tyxzuy278y5b9re7E4+911Y88KCt+MfDtrJtW1v12OO2ql079zx6Pn12s4VwiTjlNFxOmTLFBUldzBOmug4dOvhT5Wvfvr07BzN5HYGPPvrIfvWrX+0tauU87LDD/LmV677+M1y4fGXMAr8m4Y0/eOFyel+/ojQNP6KxBHWXkORAqQG7Nb6jzo38NvEPhefbnTvtm23b7OtNm9wBcM+qVfbV8uW2e/Fi2z1/vu2aO9c7GM6Y4Q6IOvhsnzjxwMqkSbarpMS+3pzZHW7i9O2uXbZ70SK3jZsHD7Z1r7xia/71rK185BFb3qqVLb3tNlvctKktbPQHm3fJpTb3d2fZ7FNOtVnHHZ+ylJx/gTugb+j1RuI1/MJ/JgQUwL9assR2Jr6Ab5882baO/sg2DxlqG/u9aeu7v2Zrn+uSCD7tXRDS67jkllts0fU32KIm1x9Q0fundSz7653uPdV6Vz3+T1v99NPuuda93M3Wv/66e/5N77xjW0aMcNukELjlgw9s06BBtqF3b7fcmk6dXDBbcf8Dbn2Lb7rZFl3X2OZfXt9KzjnX5vy2duRnIt+Lgm22EC4Rp5yGy2nTpqUMlzqvMp1hw4a5ZQcnDj6pbNiwwcaOHbu3qEVUXfFxuLDjRy5cTluy0av4NhEIH/+pFy637hv0WMP1aFgfjSt58usnlwqUGh5IA5frLioo39ebN9tXK1a4ALYj8dlSMFEA2zF9ugsTu+bMcS0VCmpqtVBwU4D7euNG15rxTSLYBRTytL49q1e75d06E4Fu+6eTbdsnn9iW99+3ze8OsY39+9uGN3rb+ldftbUvvOACkFoeVrZpa8vvu99rzWiRCEI33pQ4oDaxBVddbfMvu9wFnbln/s7mnPZb+7LmbyIPLHGWL39zks278EJ30FdwUMuJAoXChYKGQke27Fm50r0nW0aMdEFvzTP/cq0/at2Zf9llNvvU0yK3MdtF+6xWJT2/Qove74OdPssK5Xp99TlVMNdrvPb5523VP//pXme1nrnglXit555+RuRrU8hl9km1bE6dulZy9tk276KLbcHvr7CFf/ij+yyodVGtj2qJVKukWihXP/mUC6+u9TLmot+9bCFcIk45DZcbE38IFQ6jusUHDBjgT0XTBTxarl+/fn5NZuLqFt+2a48LlqUGT1+W+EOhYNn5VDepO57oYpxwmFTRFdy6C0ehDwOk1imFO4W3nbO+dMFt66jRXmjr2y8R2Lr7rSXtbMU//pEIPX/zWksSAWjBFVfYvAsutLn1Ts+LcJbN4g5+p53mDoBzzzjTHQRLzjvfHQgVCHQwXNDwqsQB8Q8uJIRbg/arJB6rQKmQFbUdUUUtQPPrN4heXwal5OxzItcbVbTfC/94jTvY6yCv8L6+Z08X6DcnvlhuGzfefYlQ8Ffrrlp7y7Pz889d0FI3pD4/Uc85/9LL3JcCdYnqy4g+k/u+oMxydWpBLvMFJfHcyV9QsuGbLVu8sJjYBhfG//1vFxRXPvKoLburpQtE+kzosxK1P/tT9EVHrXxq7dNnQ1+I1Aqo1kC1Cq559lnXSqjWQrUa6gvW9sTf7shW8QzKtsSXfbVA6gvMpsTf+w19+ngtpC++6J5LraTqKtb7pS88rrVaLZLX3+C1eCb+HrjWzsRy+juhvxdq5dT69GVBwUx/V/TFSK2y+ntTzAiXiFPOL+jRleLdunXzp8y1Qh566KG2OPHHO5UDDZYSV7j8eO4aFy4bPh8aPH1MRy9cvvs3Nxm+wlv3ce48rbO7328+U5eta81bsMAdrHWQ0EGmVBdVxDdw7yB4uQsXOohFHdzyragFTS07CnYKHQuubOgddP/8Z9f6o4O7zpla+XBr1yq0+umOtrbL864Ld0OvXrbxzbdcy5FCwdaPP/ZaUj+bbrtmz3aBQS14rtV0R+7v7ayucXWRbxs7zh3odYDXgV2trgqxamWNeo0OpMypW88F5CXNmrnWIX0+FOa2fvSRC3AKa3FQ2FAAUSjROXAK2lHbW9Eyu9bJZb8snH9BxJeFP7rPlz5nJWedHbmudEWnBpSce55bhz6n2i99PnX+3rrE31l9JtW97E6JSHwOv1qxMi8+f6h8hEvEKefhsmXLli5gjhs3zg0vVK9ePWvevLk/11xX9pFHHmlLly5106+//vreYDlq1KhSJRNxhctnRs514fKx8ODpPa/wwuUXg2zuhrkuVF7S/xJ3C7l8oxYTHYDUkqAANb/B7yMPZhUtaglzrSVqDWucCG433ey1ljzwoDv/St1ROiiqVcOdd/XBB267dKGGLtjQhRscHOOjLxbuS0WpUw68Fr1MTznIZ9pWBWy1kurCDLW2KvTpdAaFQH1BUihUONQXJYVFhWWFR4XIqM94RYpalfX7oRZctdbp90JBUb+X+n1Q663OkdTvAVAewiXilPNwKffdd58dddRRLvQlD6I+NXEA0y/FqsTBSjRfY2FGlUzEFS6vf2WiC5fvfe7fdeibPWaPVvHC5Y6N1nV6VxcuHxv/mDc/h9R6odCmVjd1PanlI+pAp6KDnbqi1cqz4MorEwffJomD3q2u26pUF1Xi4Le3i+ojv4sq8doHXVRAMXAXaCUCa8oLtBKhXOE8fIHWjs8+c78nuuodyBbCJeKUF+EyTnGFy2Dw9PXbdnsVi8Z6wfKF092k7tWtcDlm2Rg3Xdl0IFPI0zlrm94eZKvbP+mu1pxTu05kiFTXtc5tUuuhzvPaMWWK6xIHABx8CJeIE+GyEsxZtcUFy3rtQoOnj27nhcthD9iGXRvsxNdOdLdl/Oqbr/wFKkZdw2oB0YUOOu9R57LpPMd0F1CoK0/n1K148EE3ppu6BOM65w0AEA/CJeJEuKwEfSYtceHy9t5T/ZqE7pd64XLOMOs/t79rtdRYlgdC57up63n531u588F0zldUcAwXXSCglkiFyHUvvewGC9a5cQCAwke4RJwIl5Xgnjenu3D56icLvYo9u8zaHm7W5lCz3dvsjg/ucOHy7ZLMftF1vpaGX9HVyeWNATjvwovcOG0apmT9az1s64cfuossAADFjXCJOBEuK8G5HUa7cPnZUn/w9PmjvFbLl89z3eDqDle3uLrHU9Gg37piVV3WySFS50nqqlGdC6lubFogAQDlIVwiToTLLAsGTz/2waH7Bk9/v60XLhP/f7T0I9dq2XhIY2+eT+dMbhn5vjtXssyYgsf/yo2Dp2F5NPSLu9MPAAAZIlwiToTLLPtw9moXLq9+YZxfk/DyeV64nD/a2o5r68LlyzNedkPy6CIa3QmjVJhMFN2lRQNYbxo4kKF7AAAVQrhEnAiXWdZh+GwXLv85xB88ffc271xLnXO5Z5ed2+9cFy5LNpbYwmuvKxUo511yqa164gnX1Q0AQLYQLhEnwmWWXffyBBcuh830B0+fO9xrtex+qbtXuIKlAqYGUVag1B0+dPGNpgEAqAyES8SJcJlF33z7rTvXUuFy7+Dpwx/0wuXo9tZlWhcXLp+Y+IQ7f1LhUvekBgCgMhEuESfCZRbNWrHZBcsz2n/o1yS8eIYXLhePt0aDG7lwOX7FeHdvYoVL3acZAIDKRLhEnAiXWdRrwiIXLv/axx88fcdG73zLR6vY6m0rXLDUMETbpk5xwVIBEwCAyka4RJwIl1nUsu80Fy5fG+sPnj7rHa/V8vWG1nd2Xxcu7x59t61s09aFy7XPP+8tBwBAJSJcIk6Eyyw668lRLlzOWLbJqxhyjxcuP/mXNX+/uQuX78552+ac5t1lZ89K/6IfAAAqEeEScSJcZsmG7btdsNQFPbqwx+lS24XLnUsmWq2etdxdedYMH+KC5cJrrvWWAQCgkhEuESfCZZaM+GKVC5eNXvTHqNT5lmq1fPyn9sHi912r5Q3v3WDL7rzLhcsNb/T2lgMAoJIRLhEnwmWWtHvvSxcu2w390qv4/C0vXPa51v4x9h8uXPb49EX7ssYJNuvXNezrzZu95QAAqGSES8SJcJklf+g63oVLtWA679zhwuW3E16wM/uc6cJlSc8XXavlkltu9ZYBACAGhEvEiXCZBZGDpz97kguX0+e87YLlRW9dZIuuv8GFy81DhnrLAAAQA8Il4kS4zAJdHa5geWYwePrW1V6XeLuf2bNTnnXhstPwNjbr+F/Z7Fon27e7dnnLAQAQA8Il4kS4zAKNa6lweee/p3kV097wwuWbTe3KQVe6cPnZ0w+7Vsvl997nLQMAQEwIl4gT4TIL7ug91YXLnuMWeRUDb3XhcvWEzi5Y1nmjjs279FIXLreN9a8mBwAgJoRLxIlwmQWnt/vAhcuZy/0rwJ+q7sLlG5O9LvF2b9ziguXceqebBWNgAgAQE8Il4kS4rKAyg6evn+91iScC5l9G/MWFywkP3OrC5ap27fxHAQAQH8Il4kS4rKDhM1e6cHnNS+O9ismvuXC5c0Azq9mjpv0mUebUrefC5c4vvvCWAQAgRoRLxIlwWUEfz11jVz0/1p4eMcerePPPLlwOH+0NnN722YYuWJacf4E3HwCAmBEuESfCZba1+5kLl/d/2NKFyzG3NHLhcu0LL/gLAAAQL8Il4kS4zKbVs1yw/LZTLXeFeK1uJ9iXtWq5cLln5Up/IQAA4kW4RJwIl9k0sasLl1MG/Mm1Wv6jzTkuWC66rrG/AAAA8SNcIk6Ey2z6dyJEJsLl08NudeHyoz9e6MLlhj59/AUAAIgf4RJxIlxmi4Yh8s+3vLz/pXb68yfYrF/92r6scYJ9vdkf/xIAgBwgXCJOhMtsWTHdBctlL9R2rZYP3emda7m0eQt/AQAAcoNwiTgRLrNlbCcXLnv0b+RdJX5RXRcuN7/3nr8AAAC5QbhEnAiX2dLrahcumw5oYOf9q4YLlrNrnWzf7tnjLwAAQG4QLhEnwmU26HzLx39qWx/5kbsrz2M3euFyxf0P+AsAAJA7hEvEiXCZDUsnuVbLId3qui7xSbV/48LltnH+LSEBAMghwiXiRLjMho87uHB5T9+L7arHvFbLufVO91o0AQDIMcIl4kS4zIYeDeybRLis8/qp9uw1v3bhcnX7J/2ZAADkFuEScSJcVtQ3e8werWIT2v/UfvPqCTb1JK/lcuesL/0FAADILcIl4kS4rKhFn7gu8Xbd69qfH/CC5fzLLvNnAgCQe4RLxIlwWVHzPjTrdr5d1KuOdW/wKxcu13Xt6s8EACD3CJeIE+EyC+ZtnGenvnSCff7r423W8b+yPStX+nMAAMg9wiXiRLjMgm4zutmdLb0LeRY1buzXAgCQHwiXiBPhMguaDG1i/c/3usQ39u3r1wIAkB8Il4gT4bKCNuzaYGd1PsG+SATLL2ucYF9v3uzPAQAgPxAuESfCZQX1n9vf/nGL1yW+9Lbb/FoAAPIH4RJxIlxW0NbdW23qeWe4cLll+HC/FgCA/EG4RJzyJlyuWbPGli1b5k9lZvr06bZjxw5/KjPZDpc7E+tTsJx98il+DQAA+YVwiTjlRbi877777Dvf+Y4rjRo18mtTmzlzpp122mlu+e9///vWpk0bf0562Q6X63v2dOFyxUMP+TUAAOQXwiXilPNwedddd1mNGjVs8uTJVlJSYmeffbY1a9bMnxutTp06LoQuX77cJkyYYD/84Q+ta4YDl2c7XMquOXNsV8k8fwoAgPxCuEScch4ujzjiCOvcubM/ZdarVy/XIqmwGUXhUPOnTZvm15jdcccddsopmXVLV0a4BAAgnxEuEaechkudZ6mgqNbHMNX169fPnypN9d/97nf9Kc+oUaPcY1ZmcGccwiUAoNgQLhGnnIbLKVOmuFC4fv16v8ajumeeecafKq1jx45Ws2ZNf8oThMupU6f6Nfto3jHHHLO3HHnkkfZ//+//LVVX0fLTn/7Ujj766Mh5FO/1UYmaR/GKXp+qVatGzqPwGcqk6PX55S9/GTmPwmdIjTKPPPKIf2QEKldOw6W6tlOFy06dOvlTpSl0pgqXn332mV+zz8aNG23ixIl7y/vvv2/PP/98qbqKFgXWHj16RM6jTLT777/fzjvvvMh5FK/ovOEBAwZEzqNMtNtuu82uuOKKyHkUr/yf//N/bOTIkZHzKBPtT3/6k91www2R84qh9O3b1xYuXOgfGYHKldNwqeCnUBjVLa4DbZSBAwem7BZfu3atXxMvtRakOkcU5i62ymQUgGL2ox/9yObPn+9PIVm7du3SXuhX7BQuk7+oYx+NSnLvvff6UwAqU84v6NGV4skX9Bx66KG2ePFiv6a0pUuXuiCZfEHP+eef70/Fj3BZPsJleoTL8hEu0yNclo9wCcQn5+GyZcuWLmDq/MtgKKLmzZv7c80++eQTdwGOQmWgbt26LqysWLFi71BEr7zyij83foTL8hEu0yNclo9wmR7hsnyESyA+OQ+X8tprr7mhhE444YQyJxyrhfLcc8+1VatW+TXm/oAqrOiArHlvvPGGPyc3+vTpQ7gsh14fFaSm14dwmRqfofT0+hAuU+MzBMQnL8IlAAAACgPhEgAAAFlDuAQAAEDWEC4BAACQNYTLCtLVh/Xr17fbb7/dxo8f79ce/HQB1fDhw+2pp56yNm3a+LVl3XPPPXbJJZfYTTfdZJMmTfJr99HtOq+//nq79NJLI9ezadMmN2KARgm49dZbbebMmf6cfXTBli7gatiwYbnbErf+/fu7/b7qqquse/futm7dOn/OPum2PZP979mzpzVt2tSuvfbayHXoMXfeeadbh/6PWkcu6PXRNqnoc6LPUzKNdduqVatyt137rH2vyP6nW0eu6bOj7YratnTbno39z2QduaBtjSphce1/unUA2IdwWQEKDQqWgwcPdn9sfvCDH9iQIUP8uQc37U/16tXdGKIaVzSK9l8l2H8NCfXee+/5c83V67G625IGv9eQU/rDHFCw0CgBWocGwtc6tIyGmAqoTuvQcEYaA1XzVZdr2gYdiLp16+YC9OWXX261atWy7du3+0uk3/YD2f8jjjii1Dp0dbAek7yO5cuX+0vkjl6fu+++2733urOWRnZ44okn/Lnetp900knlbns29j/dOvJBkyZN3OulEhbH/i9btsw9RnXhdag+1/R6aHuSSyDY9nT7r33Wvkftv5ZNt//hdeh11OsZXgeA0giXBygITuHWugYNGtjNN9/sTxUG/bHVfibT/v/4xz/2pzzaf4WJgIJ3+/bt/SlvzFKta+XKlW5aoSNqHeE/2gpfyevQH/lwAMsF3UY0bM2aNW7f1MoYSLftmey/DnLhdXTp0sWtIzh46mCXvA59KcjHA5/GGdS2B6K2PTjIB5L3X19U9nf/tY7w7WST15FrGoqtTp06keEyk/3X/oaF93/Hjh32ve99r8z+q07zRC3HUetQfa4F4TKVVNu+P/uvZcvbf73Wes2T16H3BkA0wuUBUldm8l2B9Efq5JNP9qcKQ6pwqRAZ1coS7L9aVPQ4PT5MdYMGDXI/B93BYVpH7dq13c+bN29OuY63337bn8of1apV2zuYfybbfiD7rxD7X//1X67LWVKt47TTTvOn8kfr1q3dF45Aum0v7zXMdP+DdSQLryOX1Dqmm0TojmTaj/C+ZLr/2t+w8P7rsan2P1ivnjNqHcmvay4E26ZtjWpJTbXt+7P/Wra8/ddrnWodeo8AlFX2NwYZ0Xl2N954oz/lUStClSpV/KnCkOqP8zXXXFOmlTa8/zpnSY9LPiCoRUCtb6KurKh1/OQnP3E/l7eOF154wZ/KD2rJ0LYGg+lnsu0Huv/HHnvs3lumRq1DB0YFlnygbVHRebn64jFy5Eh/TvS269zVYNtT7b/qMt3/YB3JwuvIpeuuu86djyoKM+FAl+n+a3/DwvufSbhKFdDC25Ir2oaf/exnduKJJ7pt1vTQoUP9uam3fX/2X8uWt/96rVOtQ+8RgLLK/sYgIxdffLHr5gvTH73//M//9KcKQ6o/ztr/5G6z8P7r4iY9Luh6CqhVLjjv7qKLLopch1rmZNy4cWnXkQ+C1yh8gMpk2w90/9WFWt46tB3BOnJN26Kiz4tCwquvvurPid52nbObbv9Vl+n+B+tIFl5HrqiV+/jjj/enyobLTPc//LmT8P5nEq5SBbTwtuTK2LFj/Z+8fVHLt043CaTa9v3Zfy1b3v7rtU61Dr1HAMoq+xuDjNxyyy2u9S5MrVfqGi0kqf44a/91BXRYeP/VzafHTZ8+3U0HDj/8cHv99dfdz7rSOmodwQFX95NPtQ618OWDrVu3ljlPUDLZ9gPdf7VslrcObUs4tOQL3R9cF33t3LnTTUdtu1qJ0u2/6jLd/2AdycLryJWf//zn7vcrKAozKkHoyXT/o4JRsP+pfn9VFzxPqoAWhKt8ou0K70+qbd+f/dey5e2/XutU69B7BKCssr8xyIj++FStWtW2bdvm15i1aNEiL/8gV0SqP87a/+OOO86f8ug0gWD/d+/e7U6CVzdnYOHChaX+qKdaR/i8PC0ftY7kC2pyIei2HDZsmF9TWrptz3T/dTV6IHg/wutIvrBAXaXhdeSLYP+nTp3qpjPZ9uT912kH+7v/Wj587//kdeSKfldSlUAm+6/9DQvvf/B5idr/4PdQzxe1jvB25IsOHTq4vyuBVNu+P/uvZcvbf73WqdYBIBq/HQcouIJQ40CKgoYOcvl2LmBFBX+ck+mK5yOPPHLvgS9q/3Xg+8Mf/uBPedM6XzAwbdo0t+5gHV988YVbh65ED2h8x/Affq0jHBxyRdusbdfBLpV0234g+68xQ8tbR/B+hdeRK+Ft0AVe2hd9IQuGa8pk27Ox/+nWkS8UZpIDXTb2X+eHJ69DdQGdB6vH6LESrCN8fmwuaDuCbRJ9OfnjH//ozt8NBNuebv+1z4Hk/dey6fZfr3l4HXo99d4AiEa4rAD9QdMfoZo1a9ohhxzirtwsFApC2rfkEv5jH+y/zoE67LDDIvdff4QVwjWeoYJl8gHr+eefd+vQQVVdpnreMHWvX3bZZS6U6PEKX/lwEr22N/y6BCW8/Zlse6b7f9RRR7lTDqLWEbxXWofGWk1eR65om/QFJHit1EobjBQQCO9/1LYHX1oqsv+ZrCMfaPtVwuLafz1Gj021jlwIQp5+f/Q3Rj+fddZZZQbjz3T/te8Huv/hdeh1jFoHgH0IlxWkFpkJEya4b9WFRH/YU5WwTPa/pKTEjQcaPoUgTMPraL3JV8WG6Q+5zj1LvrghV5Jfk3BJlm7bs7H/emy6deRC+HVZu3atX1tauv3XPmvfK7L/mawj14LXKVlc+59uHXHTublffvnl3telvO2KY/8zWQcAD+ESAAAAWUO4BAAAQNYQLgEAAJA1hEsAAABkDeESAAAAWUO4BAAAQNYQLgEAAJA1hEsgDwVj+82fP9+v8QT1lUnrTx7MO5d0v/UmTZq4barsfQcAVBzhEshDukPImWeeaVdffbVf41G40t1IKlMcz5EpDaStbdFtNrVd+xsuFUiT77YCAKhchEsgDykQKRgpWA0bNsyvLb5wqW2pUqWKtWjRYr+DpRAuASB+hEsgDwXh8s4777Tf/va3fm3Z4BcVnsJ1wfL9+/ffe3/m008/3c1r27atu+e57v3+xBNPuDoJHqN7x//mN79xPzds2LDMbfFefvlld199za9bt6517drVn7Nv+/X/Mccc435OpVWrVu5+zf/7v/9b6nn0WK07KKnWoe294oor3P3ttZzu+yzJj1cJ6DHaZtVpH9q3b+/P2bf/Tz31lLuX9CGHHFJm/1u3bm2/+93v9q63vP0DgGJDuATyUBDOli9fbt/97nftueeec/VB8AkEAS4sXBcsr2n9rHuUX3jhhW5aXc26J/yIESPcc0yePLnUY0499VQbOXKkm9Y6L7jgAjdfGjVqZFdeeaUNHTrUTStY6jHBdLD91157rS1YsCDlvecVLLWcnmP8+PH2hz/8odTzBM9dHs3v2bOnbd682U3rMQHNS359unXr5rZV4Vn0/9FHH713Otj/YLuCbdA+BzR/9OjRtmXLFjfdvXt39z8AgHAJ5KUgnAU/V61a1Xbt2rU3+ASiwlO4Llh+woQJblruuOMOO/zww23Hjh1+jbnWu6DlMXjMW2+95aZFoVF1gwcPtvXr17ufu3Tp4s/1hJ9X/2uZdevWuekoGzZscMv07t3brzHXOhg8jwTBrjyar3C5atUqv2afqNenXr16ZequueYa10oswf6/+eabblq0jarTNivw62ctF4RLAMA+hEsgDyn8hEOVuq4ffvjhvcEnEBWewnXJy0vyuiXqMQpSYdWrV3etnRMnTnTzo0qwjqjnSKaWUj1GYTVMQVfPI9qWdOvRc9WpU8etS8E5aD2V8H4FwtsbLsHzRO1/EKiD1t3bb7/dvR56X/72t7/ZmDFjXD0AgHAJ5KXkcKZu8e9973sZhcuTTz55b11FwuWUKVPctAThqkePHm54JP2sbuFUop4jWbCecKuq/Pd//7d7HskkXAaGDBlif/rTn9w6x40b5+qiXh+dZ6rzTVOJ2n9to+qSh4bq06eP3Xzzze41V8syAIBwCeSlqHCmC1VuvfVWF3ICOtdPF+oEPv30U7dcclAMi1p3OIQFj3n66afdtGisSYXbqVOnumk9x1133eV+DgvCV9RzRNHFRUErpSgUhp8nXbicN2+e/9M+Wj7oatdV5s2aNXM/B5o2bRq5zuA0gaj91zYGFwolB0zR8nPmzPGnAKC4ES6BPBQVzgYMGOBCTDgszpo1y44//nh3kY6ClLqU9bhshEtdwKJ6FU0H80Uh8Pvf/767Ylrdwpqn5fVYiXqOKIMGDXLnf2rZiy66qMzzpAuXmt+gQQP3mGAbzjjjDH+ud7GO1qlAGV6vroKvVauW/eUvf3H1eu2C+cH+n3XWWZH7r/l6Hk3//e9/t4svvtjOP/98Nw8AQLgE8pKCSxBmwqLqdVGJWup0gY2uBg8voyCUvHzUOlI9plevXu7n8HmMAbX0qVtYwxglLxNeXzrTp093267W0eHDh/u1nqjtT6arzIPn05igwVXjAdU9+eSTZdajsK4hiFSvYZUCQbhUS6Tq9djwWKOifdXj1KKp5bmwBwD2IVwCQEgQLgEAB4a/oAAQQrgEgIrhLygAhChcqgAADgzhEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFli9v8BHM980numNw4AAAAASUVORK5CYII=">
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAArMAAAGJCAYAAACZ7rtNAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAFhNSURBVHhe7d0JuNXUvf5x732ePm2tA7Xyby/iLS2I9aJUxRG1Uue5Tljn4lCqiHVGbK2ibVUUoYJoqVBAEAoKiMiMAjIjg+gBmZEZmQdBHH//864kh7BP9jkBzj575+T78VkPOyvZ2Ul2jnn3ykqynwEAAAAJRZgFAABAYhFmAQAAkFiEWQAAACQWYRYAAACJRZgFAABAYhFmAQAAkFiEWQAAACQWYRYAAACJRZgFAABAYhFmAQAAkFiEWaTS22+/bcOGDfOHKkarVq1cKWR9+/ZNxHLm07Zt26xr1645306F8D0UwjJUhNGjR7v10L8A0ocwi7zbb7/9spZcHZzOPvtsu/TSS/2h+Mo6aGp5CzkYaPkOO+wwa9SokSvZaFz4O6hVq5ZddtllNn78eH+K0u6991479thjrVq1aq7oteoyabtpntm+V42L2oa9e/e2M844w2rXrm3f/e537Sc/+YmdfPLJdvfdd9uSJUv8qTzhZc8s5e1PCxYscNPVqVPHbYdcfp+5nn8gyftsXOXtVwCqNsIs8i44oEaVXB2c9iXMZjtoBstciAYOHOiWe9KkSX5NdkHYDdancePGVrNmTff+yZMn+1N5Vq5caQ0aNCj5Dvv16+eKXqtO4zRNoLzQEcwnLJhXvXr1rEOHDjZ06FB74403rF27dtawYUO3rGHBPKJKeftTixYtXHivDJUZZrNt86jtnURatzjfL4CqiTCLvMvFAXXjxo02bdo0mzdvnn3++ed+7S65CLPZbN682T788ENX9Lo8mveKFSv8oXjK+wxtXy13HEGYDevUqZN7/x133OHXeIL59urVy6/ZRXWZ32152y9z+oULF7q6zOUJfPnll3bffff5Q57MeeyJqHUPaD/S/qT9SvtXeWbOnGlFRUX+UGn6nMxts3btWn+oNI3bk30j2MZlbfPwttq0aZNNnTrVPvvsMzccZc2aNW6abH9X+6q8v9tMUesEIH0Is8i7OOHjlFNOsfPPP98f2kVhQe9//fXX/ZpdASsoOjWt1rywzDAbvCdTuD4IBZklOKDqdeZ6qPUwc3rVhQUBSuFPrYLBdOVtk0B5n6F5Z44va97B8mTS+8L16ltao0aNMn8UaJym0bRSVrCSzGW74oor7LjjjrPt27f7NeUrb/2y0fsyS7Cc2n+0H4XHZX6GhlWvfVHdFKKmCdO21PjHH3/cDjnkkJL5vvTSS/4UHk2jVulgvIq6V2RSvaZVCeYXtW+oBPRa06v1PRinZR8yZIg/xS7XXnttyTQqWqZwWI/az4KS7fsO03KE3xP1dxvsm+F11LzL268AVG2EWeSdDkI6OJXl6aefdtNt2bLFr/Hcdttt9tOf/tQfMhs5cmTJ/DZs2OAOtjpQK1CFT3fvbZgNHzTDRYLPDQwaNMjV6fPVF1MlCA0aFwgO0AoLOnirD2iTJk3cdCNGjPCnihbnM7R8wXpkLnOUYHnCdOGY3t+2bVu/xmzChAmu7vnnn/drStM4TaNpRZ8bLEcUjQtvw1/84hduffZE5jzi0jIF6x5sIxXtN9p/tBzan7RfBdtT+1sgqKtfv35JIM22nqLP0T5Y1ncnmm/nzp1dK7U+u2fPnm6azO9IdZrf1VdfbRMnTnTLqjCrZdC48DoFVK/P1GfMnj3bhdigv3BYsG76V9tD+6n213PPPdefYtffR7hcddVVbn4fffSRP1W0uH+3wfdzwQUXuK4z+lsJPitYRwDpQ5hF3ukglK0Eli9f7oZ1sAv7zne+Yw8//LB7vXPnTqtbt64LE2E63Zv53r0Js1LWQTPzMzR/1c2ZM8evMfdadeHP1sFZdeoHGtApZdXdc889fk20uJ+Rbf2iBIFB71E577zz7Mc//rGdc8457vRv4OWXX3bzVB/ZbDRO02haKS90aFx4Gx544IFZLyQLlzDNI1spT7DuYcG2034UplZ07W/a7ySYTmEzjuB7L++7ixJ8lk77BzSsv4fMrgraPhqXuZ1E9SeeeKI/5FG3DdUHITIImnfeeacbDgQ/pLKt75/+9Cf73ve+Z+PGjfNrou3J322wzRYvXuzXeMpaRwBVX7yjG5BDwQFLB6LMEqZQdcQRR/hD5lqr9N4gZAT9K++66y43HKYr7MMhpTLC7M9//nN3AVQm1WlcQMsVHg6oPrzMUeJ+Rrb1ixJ8blD0vurVq5da52D7q4Usm+DCM00rZW0/0bjwNjzooINK9YkVTRcuYcE89BmZpTzBOodpWPtPposuush9lvY7Cbbx3Llz3XB5NN9s351aM8PUMt68eXPXIhksoz5L/VcDGj7rrLP8oV203hoXtf6qb9asmT/kyZw++J61fpnl9NNPd2cRMnXr1s29J9zCrFCcWaS8v9ubb77ZH/K22eGHH+4P7VLWOgKo+gizyLvgQFme7t27u2mDU+86xRkOA2Ud0IKDf6AywmzmcCBznkE4yZStPizuZ2QOlyXzc3UqV6FF7w+3BCpIqe6FF17wa0rTOE0ThK5g+0X1yVS/Wo0Lr4/6Zep0czZR65U5jz0Rtc01v6jvIfjsYF/Yk20smmfUcmbO59Zbb3XDaqHW/W/1eUE3lPB+qOGo+QXbPDxtIOo9mdNrvH4YBdsmqoSNGjXKvb9Hjx5+jadPnz6uPig/+9nPXH1Zy5c5/6jPk7LmAaDqi/9/XiBHdBCKOghH0WnU3//+9zZ//nz3vn/961/+GLOlS5e6ugceeMCv2UWnq8OhKDPMPvvss+69madoFZhVHyjroKn68HooiEWdLlYA17hAtgN0tvowzSdb6174M7Rc4fUoS9Tn6h6zer/6QAZ01btu2VXWKXGN0zTBFfKLFi1y83nuuefccFjQB1c/WgJ6v05Vhy80CotaLw3H3Z8yRa279hvtP5m0bPos7XeyJ9tY9DnlfXfLli1z88zsbqILwFQf3g+zrfee7LOSOf1rr73mhsPdYLLR96RWfPVxz/TVV1+VKlLe3224e0PU9yNlrSOAqo8wi7yLOqBmo1OOOvX497//3b1v69at/hiPgoD634XpYhhNq9tLBTLDbNC3MzgdLjNmzLD999/f1QeCMBZ1KyrVh9ejadOm7gKWsClTprjpNC6Q7QCdrT5M89H8NN9A1GfsSdDK9rna9ppH+MlpuiBMdXqoQSbVaVz4ojH1j1TYiZr/Qw895KZXy15A97VVXTiYh0Wtl4bj7k+ZotY9uC2Z9qMwfbfhMLon21j0OZo+qs9s8N0F+1vLli3dcOCkk05y9eHwpuGo9Q6CXpx9VjKDoW6VpWF1q4gS9KPWhVunnXZaqdu3xRH37zbq+5HMZQaQLoRZ5F1wQM1WwqZPn+6m1xOm1Ic206xZs9x4taapn6Fush/VepkZZoMr1oNl6dixo5166qnuterC1J9RfRfV11DjgwNo8N5AEAJ0gNcFUCpaFtVpXCDbATpbfVjwGZpvWZ8RtR7ZZPvcDz74wM3jN7/5jV/jCVoJ9RkKTCrB50XdQioIwApsCioDBgwomT7q9mvBOC2Trs4fPHiwe19woZJKmIb1nmylLNnWPWgt1f6k/Ur7lz5H+1tA885clrLoc/QePfhBF1Gp6LXmEf7u1BquaV955RW33hdeeKFdd911brpweAvWO5O6iWhceJ8NRL0nKhiqJV11wXem70BnM7RcwXRaTn2G5pdZwvOKEvfvNtv3E7XMANKDMIu8Cw5QUUUHwkyqP/PMM+3NN9/0a3anU50KBQcccIC7WEQH2XBfT8kMs6IDYfA+fYYOqvp8vQ7TRS1qRVRLlcYFB9Co5dWV3Aqzhx56qCt6nXl1t96X+RmSrT5TnM+IWo9syvpctc5qXLg1UV599VUXRNSlQEWvVZeNlk/9cPUDQq3fupdsly5d/LGl6UEBOg2tz9apZ91dQVfhq4U+8yb/wfJHlaj9KSyYLpP2H+1H2p+0f2g/yez6sCfbWILl0X6mgKowph9Qmd+dTu+rj+yPfvQj9whfvUf7nN4fDm/B/KLoB0Z4nw1EvSdq3hLez3SPVy2LuhOsXr3ajdd7spXMeUWJ83cbzC9TtmUGkA6EWQAAACQWYRYAAACJRZgFAABAYiU2zLZo0cIuv/xyd2W7+prFoT5Z6qenPne6cjZb/zIAAAAkQ2LDrDr7BxdCxA2zumBEF6boynXd8kehNnzbFwAAACRL4rsZxA2zwa1fws9X122DMm/9AgAAgORITZjV7W++//3v+0Oe4L1qqQUAAEDypCbM6kbt9evX94c8wXvDNygP6NGauudjUI4++mgXhsN1FAqFQqFU5XLQQQfZiy++6B8ZgcJEmC1+rx5bmunTTz91T7kJygsvvOD+qMN1FAqFQqFU5aKGnG7duvlHRqAwpSbMltXNIHiCTVk+/vhj99QhAADS4le/+lXWpy0ChSI1YVZhVNOFH0GpuyHEvQCMMAsASBvCLJIgsWFWITYoCqnB68CUKVOsXr16u13cdcopp7hbcy1ZssT1idUzwOPemoswCwBIG8IskiCxYVatqrrXbGYJTJ061Y455hhbtWqVX+M9NKFhw4YuxNauXXuPHppAmAUApA1hFkmQ+G4GlYUwCwBIm3yH2e3bt1MqoXzxxRf+Fk8mwmxMhFkAQNrkI8x+8803rovgvHnzbM6cOZRKKuqCuXHjRv9bSBbCbEyEWQBA2uQjzCpQ6Zi7YcMG+/LLL+2rr76i5Lhs27bNdctcuHCh/y0kC2E2JsIsACBt8hFmly5dypM582DHjh2uhVb/Jg1hNibCLAAgbfIRZufPn+9aZVH55s6da5s3b/aHkoMwGxNhFgCQNoTZdCHMVnGEWQBA2hBm04UwW8URZgEAaUOYTbZevXrZQw89ZBdddNFu9+LPhjBbxRFmAQBpQ5hNNgVYPSAqeFpqeRRmt2zZ4g8lB2E2JsIsACBtCLPlU1BUYNRTRrt06bLb00UHDRpkbdq0sd69e5dap8ynkAbzCZs1a5Z17NjRFc1f4zOnGT58uLVv395NM23aNL92d4RZOIRZAEDaEGbLp6CoFtB69eqVtITKtdde6wJkkyZNrFatWlajRg0bPHiwGyeZ4TIzcGraQw45xL1X89Aj+sPzF73WNDfccIPdcsstVqdOHRdsMxFm4RBmAQBpUyhh9oqOEyq1XP3yRP+Ty6egqCD74osv+jXmWmgVHidPnuzXmF111VUulAbKC7OaVu8JTJo0yY4++uiSMDtjxgzbf//93fsC+lyF5sx7xe5JmKXPbBVGmAUApE2hhNkT/jbSfvrw25VW9jTMHnjggf6Q56abbrKTTz7ZH/K0bt3aateu7Q+VH2br1q3r3hNWvXr1kjDbuXNn11L7+OOP71YOPfRQ1yUhjDALhzALAEibQgmzH67YbNOXbqy0Mm/NVv+Ty6egqFAZdumll1qzZs38Ic/AgQN3C73lhVlN269fP3/Ic/zxx5eEWfXFPfvss+2JJ55wRfVB0bzCCLNwCLMAgLShz2z5osLsww8/7FpRP//8c7/GrGXLlnb66af7Q+b6woZD59NPP71b4NS0zZs394fMVq1a5cYHYVYXl2l4zJgxbrgshFk4hFkAQNoQZssXFWYVCnW6X/1elyxZYl27dnXh9dlnn/Wn8LoiKJiuXLnSzeOSSy7ZLXB26tTJvUfvnTdvnvsMlSDMisZrONw3t2/fvv4rb9mConmHh6MQZqs4wiwAIG0Is+VTMMwMs6L6hg0b2gEHHOAuEGvXrp0/xqPbaDVt2tSFzFNPPdVNn9l6qqBas2ZN9/6zzjrL3dFAITesRYsWri+u3qvSuHFjf4z3/qA+XAizKUWYBQCkDWE2fxYtWuS/8ixYsMAF0ZEjR/o1FY8wW8URZgEAaUOYzR+1nqqVtU+fPq4/rfrgnnjiif7Y3CDMVnGEWQBA2hBm80dhVg9AUL/bK6+8cre+srlCmK3iCLMAgLQhzKYLYbaKI8wCANKGMJsuhNkqjjALAEgbwmy6EGarOMIsACBtCLPpQpit4gizAIC0yUeY1S2oCLP5QZit4gizAIC0IcymC2G2iiPMAgDShjCbfMOGDXOP0R0+fLhfkx1htoojzAIA0oYwm2x6YtgPfvAD97AFvf75z39uI0aM8MeWRpit4gizAIC04QKwZOvZs2fJtly5cqXVq1fPGjZs6IajEGarOMIsACBtCLPl05O6GjVq5J7QVbt2bdcCKmoBPf300+3AAw+0Y489ttQTvPSesGA+YS1atLCaNWva4Ycf7t4ffE5gxYoV7ulgyiea7uabb/bHRNNnVKtWzR8qjTBbxRFmAQBpUzBhdvWHZiunV15ZN8//4PIFIVTB9d1333V1CoWHHnqoexTtkiVLrGvXrlarVi3XdzUQhN6A5hOu07R6j967ePFiu/7660uF2XPPPdcef/xxKyoqstmzZ9sTTzzhwm02eu+ll17qD5VGmK3iCLMAgLQpmDD73BFmjx9UeaXL+f4Hly8Iofo38OCDD9ohhxxiO3bs8GvMmjdvvtsp/vLCrMKx3hNYtWqVGx+EWX0vGlZ9YOjQoa5OXQoy6X01atSwyZMn+zWlEWarOMIsACBtCibMdmpUuWUPw2xm94ALL7zQbr/9dn/I069fPzvggAP8ofLDrLon6D1hxx9/fEmYfeGFF+y4446zE044wU4++WQXlM844wyrW7fubsFadCcDzXvatGl+TTTCbBVHmAUApA19ZssXFWbVveC8887zhzydOnVy3QYCmWF24MCBu9UplLZu3dof8lSvXr0kzGbOL5u+ffu6+erf8hBmqzjCLAAgbQiz5YsKswqcCpCLFi3ya8xuu+02u+iii/wh7wIwBdKAxofDrALxVVdd5Q+ZTZw40Y466qiSMDty5Eg3vfrUZhMsR5wgK4TZKo4wCwBIG8Js+aLC7Pbt210XgOACLgXTzOCpfrV16tSxu+++212U9eSTT+4WZgcPHuz63Woeer+CrLoRBGFWzj//fHe7LdUNGDDA/asuBxJ0W9CyZZZsCLNVHGEWAJA2hNnyKTSGA2ZA66DAqv6zanXVdJmCIKwW2qj5zJo1yzp27OjKRx995C7geu211/yxHt1L9qabbnKtvnq/tp8E84sq2RBmqzjCLAAgbQiz+aMwGt4OGlZLa3kXce0LhdktW7b4Q8lBmI2JMAsASBvCbP4E4VUXgulhDEGXhVwizFZxhFkAQNoQZvNL96lVqFV3g/A9a3OFMFvFEWYBAGlDmE0XwmwVR5gFAKQNYTZdCLNVHGEWAJA2hNl0IcxWcYRZAEDaEGbThTBbxRFmAQBpQ5hNF8JsFUeYBQCkDWE2XQizVRxhFgCQNoTZZNPTxcp64lcmwmwVR5gFAKQNYTbZ9ibM8jjbSqZnHuuJGHpWcePGjf3a7PSFnn/++XbQQQdZvXr13LOS4yLMAgDShjBbPj3UQEWGDBmyW3gcNGiQtWnTxnr37l1qnYL3BMLzCehhCR07drS+ffu64ahphg8fbu3bt3fTafowwmyBa9asmQukekbxggUL3BfWtGlTf2y06tWruy915cqVbgfTY+LifsmEWQBA2hBmy6dwqQxy+umnuwY2vZZrr73W5YwmTZqUNLwNHjzYjRONC9N8wnWa9pBDDnHvVYPdhRdeWCqc6rXec/LJJ7txmn7gwIH+WMJswVMw7dChgz9k1rNnT/eFKtxGydxJJNgJ4iDMAgDSplDC7CPjHqn0Epfyhc74tm3b1q8x69Kli8sXkydP9mvMrrjiChdsA5n5IzOnaNqrrrrKHzKbOHGi/fCHPywJpzNmzLD999/funbt6oZFLbSnnXaaP0SYLWhr164ttZOI6nr16uUPlaZfLsHONm/ePNeye/fdd7vh8hBmAQBpUyhhtlGfRnZ0t6Mrrdw89Gb/k8unEPqd73zHlixZ4teY3XrrrXb00Uf7Qx6FyiOPPNIfKj/M1q1b11q3bu0PeX70ox+VhNNu3bq5XKL3hYvmoVAqhNkCNn36dPdlZe7sqlPflGwWL15sN9xwg9vBfvCDH5TaScLGjx9vJ554Ykk55phjrFq1av5YAACqvkIJsyM+GWFvLXyr0sqElRP8Ty6fAqRCY1hUiAyCZiD8WjLHH3jggdavXz9/yHP88ceXzFf/nnXWWa7o88JF85Ko5SgLYbYSzZw5M2uYVRN7lE2bNrn+LOprO2DAAHv++efL/JLXrVvnOlUHpXPnzq5rAwAAaUGf2fJFhdl77rnHnf0Ne/rpp61Bgwb+kLkGsqKiIn/IXGNcOMwqszRv3twfMlu1apUbH+QWXRT2/e9/3zXwZUOYLWAKpvpCo7oZ9O/f3x/anQKswqi6KARatmy5245TFroZAADSJh9hVt0Akx5mR4wY4fJF0J9VXRAuuOACe+ihh9ywXH755RZ0fdSZY80jnEmeffZZd/GX5rFo0SK75ppr7IwzzigJp7ofrM4yazjo4qDtpmuIAoTZAqdfPGotDejuBAcffLAtXbrUr9mdOmPrSsIwfcHacYK+JWUhzAIA0oYwW76oMCs6U6yMoUCqf3VBV7hfrTKIcskRRxzhxqulNRxmpUWLFlazZk07/PDD7dFHH7UTTjhht+6UupuT7nKg96nVV/8ee+yx/lgvzKouswTdEDIRZivZfffd5wKtru5TM33Dhg3tzjvv9Meaa7VVR2vdhisY1hf40ksvudbZMWPG2CWXXBK5A0YhzAIA0oYwu2+0Hsof4RCbScEyfNY4G7XeKseo8S6TQm22uzntCcJsHqibgH6xKGRmPjTh/ffft+OOO871MQnoyr+LL77Y3UKjdu3a7r60cVplhTALAEgbwmz+KOSqi4Ea33r06GG33367u4BdXS1zhTBbxRFmAQBpwwVg+aMwe+WVV7ozyPpX3RIyrxWqaITZKo4wCwBIG8JsuhBmqzjCLAAgbQiz6UKYreIIswCAtMlXmF2/fr0/hMryzTffuKxDmK3CCLMAgLTJR5hdvXp1mVf/Izd039o5c+bYV1995dckB2E2JsIsACBt8hFmP/vsMxeqFi5caJ9++imlEsqyZcvcNte/SUSYjYkwCwBIm3yEWdmxY4cLWWqhVdE9Viu6BPPe0xI1r8oquVqW5cuXu5ZZdTVIIsJsTIRZAEDa5CvMAnuCMBsTYRYAkDaEWSQBYTYmwiwAIG0Is0gCwmxMhFkAQNoQZpEEhNmYCLMAgLQhzCIJCLMxEWYBAGlDmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBITZmAizAIC0IcwiCQizMRFmAQBpQ5hFEhBmYyLMAgDShjCLJCDMxkSYBQCkDWEWSUCYjYkwCwBIG8IskoAwGxNhFgCQNoRZJAFhNibCLAAgbQizSALCbEyEWQBA2hBmkQSE2ZgIswCAtCHMIgkIszERZgEAaUOYRRIQZmMizAIA0oYwiyQgzMZEmAUApA1hFklAmI2JMAsASBvCLJKAMBsTYRYAkDaEWSQBYTYmwiwAIG0Is0gCwmxMhFkAQNoQZpEEhNmYCLMAgLQhzCIJCLMxEWYBAGlDmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBITZmAizAIC0IcwiCQizMRFmAQBpQ5hFEhBmYyLMAgDShjCLJCDMxkSYBQCkDWEWSUCYjYkwCwBIG8IskiDRYfbBBx+0WrVqWY0aNaxx48Z+bdlatmxpRx55pO23336utGrVyh9TNsIsACBtCLNIgsSG2WbNmlm9evVs2rRptmDBAmvUqJE1bdrUHxtN4w877DDr16+fGx49ejRhFgCALAizSILEhtnq1atbhw4d/CGznj17upZWhdsoqtf4QYMG+TV7hjALAEgbwiySIJFhdu3atS6YTp482a/xqK5Xr17+0O769u3rxs+aNcu16qpbguriIswCANKGMIskSGSYnT59ugumGzZs8Gs8qmvTpo0/tLt27dq58epf26RJE7vhhhvskEMOydrNYNy4cXbccceVlKOOOsqqVavmjwUAoOojzCIJEhlmZ86cmTXMtm/f3h/aneo1/uWXX/ZrzO69915Xt3HjRr9ml/Xr19s777xTUrp27eq6NgAAkBaEWSRBIsPspk2bXAiN6mbQv39/f2h3qtf4lStX+jVm8+bNc3XZ+tmG0c0AAJA2hFkkQWIvANOdDDp37uwPmbuw6+CDD7alS5f6NbtTvcaHLwALuh4sW7bMr8mOMAsASBvCLJIgsWH2vvvuc4F24sSJVlRUZA0bNrQ777zTH2s2adIkq127tq1YscKvMTdet/DSLbmGDBliderUsUsvvdQfWzbCLAAgbQizSILEhlnRAxBq1qzpQmbmQxPUdeDEE0+01atX+zUeTafp9b6495gVwiwAIG0Is0iCRIfZykSYBQCkDWEWSUCYjYkwCwBIG8IskoAwGxNhFgCQNoRZJAFhNibCLAAgbQizSALCbEyEWQBA2hBmkQR5DbN68pbuOqBbZRU6wiwAIG0Is0iCvIXZpk2bugcWKCAGYVb3ju3UqZN7XWgIswCAtCHMIgnyEmZfeeUVu+6666xPnz722GOP2ZgxY1z9hAkT3MMPChFhFgCQNoRZJEFewqweXNC1a1f3+s9//nNJmN22bZsdcMAB7nWhIcwCANKGMIskyEuYvfLKK13rrOgpXkGYHTp0qHsEbSEizAIA0oYwiyTIS5jVY2Qvuugimz59urVo0cKF2QEDBti1117rhgsRYRYAkDaEWSRB3i4Au/XWW90FYGeffbYde+yx7nWtWrX8sYWHMAsASBvCLJIgb2FWRowY4e5e0KZNGxs0aJBfW5gIswCAtCHMIgnyEmYbNWrkuhokCWEWAJA2hFkkQV7CbLNmzQizAAAUOMIskiAvYXb27NlWr14969y5sy1evNivLWyEWQBA2hBmkQR5CbNqldUFX1FFXRAKEWEWAJA2hFkkQV7CrB5fW1YpRIRZAEDaEGaRBHkJs0lEmAUApA1hFkmQtzBbVFTkuhvoaWB6vK2eBFaorbJCmAUApA1hFkmQlzA7fvx41z+2bt26dsMNN1jTpk2tQYMGri54zG2hIcwCANKGMIskyEuYveeeeyIv9FJLbf369f2hwkKYBQCkDWEWSZCXMKsgm61LgVpnCxFhFgCQNoRZJAEtszERZgEAaUOYRRLQZzYmwiwAIG0Is0iCvJ3TV6C97LLLXAttUHr37u2PLTyEWQBA2hBmkQSF2UG1ABFmAQBpQ5hFEuQlzA4YMMD1j82kuqj6QkCYBQCkDWEWSZCXMPvAAw/Yc8895w/t0rdvX9d3thARZgEAaUOYRRLkJcxyay4AAAofYRZJkJfkeMcdd1izZs38oV3atm3LrbkAACgQhFkkQV7C7OTJk6169erWvHlz10I7c+ZM+9Of/mQ1a9bk1lwAABQIwiySIG/n9NUKq0CrbgVBady4sT+28BBmAQBpQ5hFEuS1g+qOHTusqKjIPvzwQ9u8ebNfW5gIswCAtCHMIgkK4mqr5cuX29y5c/2hwkSYBQCkDWEWSVCpYVZ3MejYsaM/5Dn11FNLuhnUqVPHBg8e7I8pLIRZAEDaEGaRBJUaZg888EBbu3atP2Q2aNAg+9nPfmadO3d23Q3OOecca9KkiT+2sBBmAQBpQ5hFElRamJ03b55rfQ279NJL7fbbb/eHzLp06WJHHnmkP1RYCLMAgLQhzCIJKi3Mzpo1y4XZNWvWuOEFCxa44V69erlh0W26MgNvoSDMAkA67fzqG9vw2Re2dMN2m71yi01ZvMHenfupDZq10npNWWqvvLfI/jFqvv3t7TnWst+H1rzXDLul61T7badJduVLE+yyF8fbRS+8Z+e3G2tntRljZz472ho+846d/PdR1uCvI+2XTwy3eo8NsyMfHWo/ffjt3Yrq/q94XP1Ww+24J0fYiX8baac89Y6d0fpda/TcaDvn+TF2/j/es4vbj3Ofo8/rN32Fv+T7jjCLJKi05Kg7F9SoUcO6d+/uhtu3b2/HHXecex1QmOVxtgBybeP2L+zjVVts7Ly19p+py6z9Owvskf4f2m3d3rfLOoy3E4oDg4KEgsI9/5lpnccttvc/2ei/O702bf/Slm/cYR+v3mrTirfHe/PX2YjZa1yoe33acus5+RO3rV58d4E9P2KeC3ePvvmRPfT6LLu7OOD9/tVpdlOXKS7kXf/KZPf61m5TXf2dPae7ae7rM9NN/0hxKNR7n3hrtpvPM0M/tjbD59qzw+bak4Nm25/6f2T39/3A7nptut3e/X27sfNka/zPiXZJcajT96awp+B3THEIzAyIVb28NGah/43tO8IskqBSm0FbtWrlWl51IZj+7d27tz/Go/FRTwYrBIRZFILtX3xta7futMXrPrMPV2y2yYvW26R9KApz73z8qQ0vWm2DP1xlb32w0rXq9H1/mb02Zam9OvET6zJ+sf3rvUXuAKnQ127kfNcSpfCiIDN1yQabt2arfVq8XPmybedXtqI4ZBWt3LLbNpm4cL09URx8FHiufnminV4ccKIO/ntS1ML25wEfue3zUfF3kBTrtu20BZ9ucyF01JxP7Y3py61LcfBsN3KeC4cPvzGreDvNsN/9e4oLhRf84z0XCNUaGLUd0lbUeqrW1PPajnX7koK4AviDxcG71VtF1rY4vL9c/DfSY9In1n/GCve3of0v/PdWWWXV5h3+t77vCLNIgko/pz98+HAXWlesKH0aRPV9+/b1hwoLYRYBBUqdctQBQ6FSLXwzl23ap2CpgPiXNz+y+/t8YE1fnWY3vDLZnTI8+/kxdspTo+zox4dFHmALsei0qE6f6qB/TadJrtVNB3y1rik47U1RK9/jA4tcK6nC1hUdJ7hTrHsbtP7vL8Pc+6/71yQ3T7X6KbQr0Ie/l/EL1lmnsYusWXFo+dWz2YOwTu0qNOuHgIL9vvj8y69dC+jqLZ/bkvXF+9fqrfbB8k3u1HZ42cJFoempwXOsRXEg/UOPaa7lU6eedTr6qL+UPnW9t0X7ofZHtXz+pnj/1PbT6XSFOrWoqjVVLanPDPk48nusiPLCqPnuO+le/EOrT/GProHFP8D0Y2xM8Q8zbaMPiv8Wtc207bQN1Qq/o3ibYu8QZpEEhdlBtQARZpPns51f2Zrig9mitZ+5MKCDvlpL1Gqi1hO1NCokqVVFYeuO4hBwY+cprtVF/c8UJE975h3Xp61QwmTdPw9xAU4tZmo5UwuaAmM+i4LcvgTLiihBgFafRC2TArROVStA63tWyFTgW7h2m/sxsrfUAvze/HXW4Z0FrkvCScWfGbU8hViOfWKE66t5ecfxdnOXKe6Uvn5AqUVR4fC1yUvtzZkrXautQqFanfVjTS26hMH0IswiCQizMRFm80/h9JP122360o0ulOrCC/XNUxjVBRdqJVJr4PF/zW2oUqDUxRgKMmqt02de2mGcC5ZaBp1+VGtV0A9Qy6ZWKwXmoB+gltn1Axzi9QPU6Xu1DOr0vloHdfpfp4PnrtnqTp+rpS4pwqf81bKp9dEp+ZdGL4xsaYtX5ru+mIXUtSGwftsXbpn0PerHUEX00dSPJ/XbVbcItYKqH6j2L7XYK0Sr24T6i6rfqLoItB421/0tqM/qoFmrXOBWN5RlG7bblh3J2XdQeAizSALCbEyE2Yqlfp/qv6eLakbOWRPqvzd/txCjVtJTn37HBciog35Z5RePDnWtqmqNUj9HnXrVxSZqkVKo/PvgOe4KZIWk3lOXuYtYRs/1T1Uu3+SCpMKzWnc3FwcCXdEMAGlCmEUSEGZjIszuGbVWDflotQuMaq30+u+N2uf+e+oDqFYq9ZtUy5RapHRxkk4jj1uwzrUGqp8cAGDfEWaRBITZmAizZdMpzX+PX+KuhtYVv5khVH0adfo/aCVVv0adig9aSXXKPbOVVPdx1Kl2XcihU9cAgMpFmEUSEGZjIszuoquD1TVAV4ArlOp0fji4/qzlYLuwOLCqX6j6S2p6AKiqtn+13TZ+vtFWfbbKPtnyic3dMNdmrZ1lU1dPtYkrJ9q4FePs3WXv2shPRtrQJUPt7UVv25sL37Q35r9hfeb2sdfmvGbdZ3e3Lh91sX99+C97+YOXXdFr1Wlczzk9rffc3vb6vNdtwIIB9tbCt2zw4sE2fMlwG7V0lI1eNtp9jj5v5baV/pLtO8IskiDRYfbBBx+0WrVquYcxNG7c2K8t35IlS9x9bvfkaWNpDrNzVm1xF5boIia1rIaDa1B0hbTCra6E1oVaQJIs2bzEJq2a5IJCh5kdrOV7Le2hsQ/ZsCXDXDjB3nEhb+dGW/3Z6t1C3vtr3rfpn053r2evn+3qF2xa4KZZvnW5m37tjrXuvVu/2OrmE7bty222fsd6F9oWb15sczbMsZmfzrTJqybbmOVjXMBT2NP32WNOD+v8YWfrOLPjXhXtDzcOudEav9XYLh1wqZ33xnn2q//8yk557RQ7utvRBVle+fAVf0vtO8IskiCxYVYPV6hXr55NmzbNPRpXD2Jo2rSpP7ZsCr7BAxziSkuY3fr5V+4iKN2ySldO6zGKmcFVXQbUD1YXaE1YuJ4LoxJux1c7XGBQcFi3Y50LEiu2rXDBYuGmhS5oKHDMWjfLBRAFkTnr57ggoVBR6LZ8scU+3vCxjV0+1rVutZ3W1u4bfZ9dP/h6O+M/Z0SGgcxy8msn2+3Db7cXpr/gWtjWbPcey11V6Hv8dPunLhh+uO5Dm7J6ir2z9B0XCHt93MuFQYW6p6Y8ZX8e92e7d/S99vsRvy8V8s7sc2ZBh7xclQY9GthpvU+zs/qeZRf1u8iuGHiFXfv2tfa7ob+z24bfZn8Y8Qe7a9Rd9sd3/2j3j7nfWoxtYY+Me8T+MuEv1mpiK/v75L/bM1Oesefef87aTW/nhegZHdzrNu+3sdZTW7tpnpz0pD0+4XH3HTz83sP24JgH3b589zt3W7NRzdzn6PPUYltRCLNIgsSG2erVq1uHDh38IbOePXu6cKpwW5auXbvar3/9a8KsT7c3Uh9V3ZNTzwzPDK4qun+o7gKg+3WqDysqngLi8E+Gu9OHClydZnVyoeuvk/7qWgl1sNJB6reDfmuXDLjEHTRP6nlS5IE1H+WM3mfYxf0vthsG32B3jLzDHWh18NVBWS1jAxcOdC1mOu26N0WhSttDYUoHc81fB/E7R91ptw671ZoMbRJZFCailjezKIhpegWM9jPal7TKKUxou5/Y88RS71EQVoD456x/2viV411oLhTbvtjmfoyotVKnoPvO6+uWU9+JwpS22WUDLrPTe59ear2SUPR9aJ875/Vz3H531VtX2fVvX+/WS/ufwrb2Ee0rWufnpz1vL8580W0DBfNuRd3cfhmctu8/v/+u0/bFf4cK8tpf9b2qtVc/4orWF7nWY7Ucq9VYPwDTgDCLJEhkmF27dq0LopMnT/ZrPKrr1auXP1Ta6tWr7Sc/+YlNnz49tWFWt5zS7a900/R6Ea2uKroZv55T/8a05e6m6VWVDvhqiVQrpFog1fo449MZ7uA14pMRexQs1XIXddCl5L+o1Uwth2q1UiuYTsEOWjTIBRT1cYxLLdQKPwq8ClBRn3X+G+fb/aPvd8EpCMSVUdSK13hQYzu779mRy1VeUThs1KeRW69rBl3jQmHzd5pbi/dauNZAhcF/fvBPe3X2q9Zvfj93Gl/9M6eviQ55O7/O//1/UTEIs0iCRIZZhVEF0Q0bNvg1HtW1adPGHyrtlltusbvuusu9Li/Mvvfee3bMMceUlLp161q1atX8scmzfOMO+2PvGaWCq27wrttc6U4CurXVZ18UXn9Xnf5W4Jy3cZ4LmzqIqi+jDqo6uOogq1NxT0x8wh18dRDWwfi6t6+zKwde6Q7QOsirFeqEnidEHswruigcnNrrVNd6pJCgFiQFHS3Lb978jVsuhQa1Jt085GYXjnUK8k/j/uRakhROdPGHTvHqQhFdOKKQrT6GCt7qClBILUMKMVoufT86Da9l1kUrCnVaH51WVZi8Zdgt7vS0Ws/0o0AtaA+MecC1ounUqVrSFJ70nmenPutClE7tK7AphOr7ViujWnoVqNR6pu2iz1XXh0WbF7nuD/qRoh8rubRp5yZ30Y1ab9UCrNActS/ko2hZ1NqslnK1Hj86/lG3HfWdKMirf7C6XqhrAVAWwiySIJFhdubMmVnDbPv27f2h3fXt29cOO+ww277du5CgvDCreY8dO7akvPrqq65rQ9Js+OwL90z7cIB9+I1Z9p+py/b5GfJ7Qqdgl21d5lpxdLXtkMVD7D9z/+NaPRVadLBVuFEouPzNy11rZ67DgVpTdar43NfPdS2sV791tTv4xw6W6wozWCJ/Plr3kWvNz2w53ZMSnArvWtTVnQrXflfqVHhxkM88Fa4fOmodzbxYCtgXhFkkQSLD7KZNm1wQjepm0L9/f39od7pYTAE2XDS9/h09erQ/VXZJ62agOwroAq3/+4vXleDnjwx2j1NdtXmHP0XuqMVHB1+dllfrY1SQ3JOiFk61bKpFUy2ZatlTi55a8tSHUhem6OCvg74O9jrI6wKWD9Z+4JZlyeYlrrVuw+cb7LMvq263CQCoaIRZJEEiw6wonHbu3NkfMhs0aJAdfPDBtnTpUr9md5lBVqUqhtmvvv7WPeNfF20FLbG3d3/fPTo2F9QyqdOWal3Vlc1RYVRFFyupBVStn2r51MU76r+oU7S6j6LCry68UAhVAFX4JHgCQH4RZpEEiQ2z9913nwu0EydOtKKiImvYsKHdeeed/lhz9T/96U9txYoVfs3ugjAbV6GH2W+/NfdI19NCT9+66qUJNnPZJn+KfafT6TrFHlzhfUqv6FvwqIuAugyoC8GElRNy3ncRAJAbhFkkQWLDrLRs2dJq1qzpQmbmQxN0kZgCru5gEEVhVvemjauQw+w7H39q57cbWxJiz//He65uX3xb/N/8jfPd1du6l+Gv+/w6Mrjq3opNRzR1tzPShT9cUAIAVQdhFkmQ6DBbmQoxzKrVVa2vQYhVq2z/GStcK+2e+ubbb9zFK7r/olpVFVIzg6uu0L956M2uS4Eu4NIFXQCAqoswiyQgzMZUSGFW9369rdv7JSFW/WO7T9yzR25+8fUXNm3NNHfltC6oiroBvwKtgq0Crp4KpMALAEgPwiySgDAbUyGF2Uvaj3Mh9hePDrU2w+fa9i++9seUbenWpfaP6f+wm4bcVCq4quh+qLrnp+4KoHu6AgDSjTCLJCDMxlQoYXbr51+5IHvUX4a6e8jGoacc6RngmeFVN/HXvVT18AE9+hIAgDDCLJKAMBtToYTZEbPXuDDb+J8T/Zrs9FQmPU0pHGB1Oyw9PUtPSAIAoCyEWSQBYTamQgmzTwya7cJs2xHZuwHo8a+6SCt4gtYvu//ShVjuNAAA2BOEWSQBYTamQgmzF73wnguzExau92t20SNV1SdWdx1QiK3fvb49Mu4RW7Et+l67AACUhTCLJCDMxlQIYfazL76yn7UcbLUfGWw7v9p1ZwE9i/2fH/yz5CEGx3Q7xu4ffb97jCsAAHuLMIskIMzGVAhhNrO/7M6vd1rXoq52Ru8zSvrENhvVzD3sAACAfUWYRRIQZmMqhDD7pN9f9tnhs93ts8JP5dLjZYvWF/lTAgCw7wizSALCbEyFEGYv9u8ve/1bTUtC7PWDr7epq6f6UwAAUHEIs0gCwmxM+Q6z4f6yx/do4O5UMHb5WH8sAAAVjzCLJCDMxpTvMDtyjtdf9qJ/dnctspe/ebk/BgCA3CDMIgkIszHlO8z+7e05Lsze3O8pF2b/Oumv/hgAAHKDMIskIMzGlO8we4nfX/aaN29xYXbI4iH+GAAAcoMwiyQgzMaUzzC7q7/s23ZSz5NcmNWjagEAyCXCLJKAMBtTPsPsqDmf+v1le7kge0G/C/wxAADkDmEWSUCYjSmfYfbvg73+srf0a+PC7J/H/dkfAwBA7hBmkQSE2ZjyGWYv7eD1l73xrTtcmO0/v78/BgCA3CHMIgkIszHlK8yG7y/bsNdpLswu3brUHwsAQO4QZpEEhNmY8hVm3/nY6y978ctvuCB7Wu/T/DEAAOQWYRZJQJiNKV9h9im/v+xt/du7MPvAmAf8MQAA5BZhFklAmI0pX2H2sg7jvYu/3r7XhdleH/fyxwAAkFuEWSQBYTamfITZoL+sSqM+jVyYnbdxnj8WAIDcIswiCQizMeUjzI6eu9brL/vSQBdk9cAEAAAqC2EWSUCYjSkfYfbpIR+7MNu0fycXZu8adZc/BgCA3CPMIgkIszHlI8xe9qLXX7bpkIddmP33R//2xwAAkHuEWSQBYTamyg6z4f6yF/e/xIXZWWtn+WMBAMg9wiySgDAbU2WH2THzvP6yl3Qc7oJsgx4N7Otvv/bHAgCQe4RZJAFhNqbKDrPP+P1l7xzQ1YXZ24bf5o8BACTKlzvMdm4127HRbNunZltWmm1aarZhkdm6eWZrisxWzTJbMc1s2WSzT8abLRm392XTMv+D9x1hFklAmI2pssPsb/z+sncNe8yF2Zc+eMkfAyCnPltn9ukcL1DoNaoOhUoFSoVJBUmFSAVIfdcLRhX/j36Q2Yevm8141WzKv8wmvGA29lmzUU+YDXvEbNC9ZgPuMOv7O7Ne15q9+huzLuebdfqV2Ysnmf3jGLM2dc2ePtzs8YPyV95r46/wviPMIgkIszFVZpjd+dU3Jf1lrxx4lQuzU1ZP8ccC2CNffGa2cYnZ8qnFf8hvm03rVhxQnjMb0sLs9VvMul9q1vEUs2drRwcDBZQ3bvPCzaoP/JmmlIKgWhQVAtWCqABYNMALf5OKf3Ar+I34ixf6+t3uBb5uF3thr/3xXtD7+0+it3NVL3//Hy/ktv6Ztx3aHlW8b9X3tkvHk81ebuhtp1fO9gJy14v2vnzY1//C9h1hFklAmI2pMsPsWL+/7MUvjrL63evbL7v/0nZ+vdMfi0T7fIvZtjXF4eoTs7Ufm62cabZ0ktmiMWZzhxQHg/5mM18ze79LcTjoaPbe82bv/t1s+KNmgx8we/MuL1j95wazHld6B67O55r969dm/zzD7KVTzV480eyFY83aHW32/C/MnqtTfACtZfbUYWZ/K96How60e1qeqlE8/3rFn3m6Fwb73GT21h/NRrUyG9/ObHpxYJzzlnfKc/VHZptXeKFyXyhIqTVtxXSzhe94LWhTX/EClFrNBtzphafwQf3fF0Yvf1RR0FCoCL//X41KT6dtqHHvPGk2b1jxcm3yFzABtm8o3u/mei2R2temdDIb/bS3/QbeXRzum5j1vMrbbgpXClsKX5nbIMlF37PCpH6kqDVVAfLfF3itrNp/1Oqq1lcFcm0XtcpqH5vQ3vtBo+CufU+tuArz2pZq3VXA1/6pVl/tq2oFrgIIs0gCwmxMlRlmnxnq9Ze9a8BrrlX2hsHFwQUVa+e24oC1vDhofWi2+D2z2QO9FrtxxUFs9FNmIx/3AuTQh70QOegeL0j2/0NxmLy1+IB3sxcoX7vGC5UKdC78FIdKBcr2x3lBUiGyogIkZe/KX/+fF8oU0BTW9L0qmCiQfDLBCyBlUShZPNb7UaHvOOoz9ANCAUg/QhRq8kE/hqZ3934ADW3p/ehRQNP++NwR0cu9p+Xpmt5+rRD4ylnF+/1lxX8HN3rrPvjB4oD/1+K/obZe6Pugl/eDRttOYU8hWkFPP+iQGIRZJAFhNqbKDLOXd/T6y9474m8uzLadVnxwwC5ffW722driELLYC6MKJGoh++gNL5BO7OC1Nuk0cr/fFwfOxl7rZYcG2U8lV1ZRi6aWoe3/ecvz8mnFy3aOWbdLvOXsUxwMtMxv3e0t/8jHvHVRyJ78srd+s/7jhe/5w70grvXXKfSVM7xWUPX3XL/AO7WuwL51tdf38/PNXuuott++0sUsCia6eEUtpVoOtVR90NtrLVWgefdvxevwkNdiqsCjYBW1TeIWtSxXdmtwNksnev0SX708+rS5WjWD1t09LcEPIrWuKzg+89OK+UGkeekUtn6EKXjqR5srxfuX+oa+/2/v9LRCsVob1aVi/ULvTEIVaWXEniPMIgkIszFVVphVf9naj3j9Za9/+wYXZscuH+uPTTC1hG5dVRx+5nuhS4FD/RcVzKZ29sKaCz/FAe7NZl6oU1BQ3zH1Z1T4e+Z/ow/Se1MUDtoc6c1bLXa9r/M+d9ifvIO7gooCkoLx5H96y6iwNLOnt8wf9fMC5dzBXqhc+K63TsumeAFb66kgqdCtdUfVpkCvfaX39RW7n0aV4AeRgn2HE7xwrx9E6h4wsLnXgqxgqv1SgVRhFNhLhFkkAWE2psoKs+/N9/vLtn/X9ZVVn9ntX233xyaEbjEzravZ2/d5B9qoA/K+FrVW6WCuFiy1uva4wmupU1cAdQ3QAV3hQuFToVktaWqxVKAGKssXxX+7ahHfvt5rIVdrsfpLq+VcfabVgqx+02pZ1z6qH0R6rfqSH0TrctfCDJSDMIskIMzGVFlhtvWwuS7M/nHA665V9qq3rvLHFCgdkNVSqT56CpVRwVNFLaG6EEmnThVwdTpV/U3V/1Sniof/2WzMM8UB9EWvBVQtn/NHeAd4Hdh1ylyBAABQaQizSALCbEyVFWav6DjBhdkHRz7nwuxTU57yxxQAtSqpT6IujlKfRV0MkhlaVafuAeqLpyvO1bcSAJBIhFkkAWE2psoIs+H+srcMvdWF2eFLhvtjK5luLaOWUbWWqgVVraqZwbXVwd5FTOprqguTPp1t9u23/gwAAElHmEUSEGZjqoww+978dX5/2bHWoEcDF2Y379zsj80h9etTX73x//BuOaX7k2YGVxXd9FtX3ev+mrq4JEn31wQA7DHCLJKAMBtTZYTZZ/3+svcMGOiC7MX9L/bHVKCvvzBb/r53s3TdM1X3x2xVrXRwffJQs05nehdxzejht7p+488EAJAGhFkkAWE2psoIs1e95PWXfeSd9i7MPjbhMX9MBVg02ru/5BM/LB1cFWb15CPdD1T3CNVthhR6AQCpRphFEhBmY8p1mA33l71j5F0uzA5cONAfu5d0SyDdHUDP/g6HVz0NSU9C0lOQdHN0bogOAIhAmEUSEGZjynWYHbfA6y974Qvv2Uk9T3JhdsW2Ff7YPaSb9ut+q+GnBv21uvdkKd3UHwCAGAizSALCbEy5DrPPDff6y97/5lAXZH/d59f+mJi+2mk28zXvUZi7tcIe4z3JSncnAABgDxBmkQSE2ZhyHWavfnmiC7OPje7kwmyLsS38MeXQwwSGP2rWutauAKs+sK819u44wK2yAAB7iTCLJEh8mN2wYYOtXr3aHyrfkiVLbNasWf5QfLkMs0F/WXcng3fvd2G2z9w+/tgIuqvA3CFmPa707vUahNjWPzMb+ZjZpmX+hAAA7D3CLJIg0WH2wQcftP3228+Vxo0b+7XRWrVqZSeffHLJ9HXq1LG7777bH1u+XIbZ8X5/2Qv+8Z7rXqAwu2DTAn9sBF28FQRYFT1GdlYZ4RcAgL1AmEUSJDbMNmvWzOrVq2fTpk2zBQsWWKNGjaxp06b+2NIUZtu3b29FRUW2du1a69ixowu1qo8jl2H2+RHzXJh9aMA7Lsie1vs0f0yEndu8APvX/2f21h/NVn/kjwAAoGIRZpEEiQ2z1atXtw4dOvhDZj179nThVOE2rtq1a9vll1/uD5Utl2G28T+9/rJ/HdPNhdk/vlscUrMpGuCF2Vd/41cAAJAbhFkkQSLDrFpWFVwnT57s13hU16tXL3+ofAqn6qoQR67CbLi/7ENj/uTCbPfZ3f2xEfrd7oVZPcELAIAcIswiCRIZZqdPn+6Cqy7+ClNdmzZt/KGytW7d2vWhzZxHYOzYsXbUUUeVFLXiVqtWzR9bcSYsXO+C7Pn/eM8u6HeBC7NF64v8sRm++drsqcO8MLt1lV8JAEBuEGaRBIkMszNnzswaZtUvtjzDhg1z0w4aNMivKW3jxo02YcKEkqIWX3VtqGht/f6yDw8Y74Jsgx4N7BvdrSDK4ve8IPtyGX1qAQCoIIRZJEEiw+ymTZtcGI3qZtC/f39/KJou+NJ0ffv29WviyVU3g6C/7NNje7kw23RE9ovYbGhLL8yOfsqvAAAgdwizSILEXgCmOxl07tzZHzLXynrwwQfb0qVL/ZrS9jbISi7CbLi/7KPjnnBhttOsMvrCtjvaC7OrPvArAADIHcIskiCxYfa+++5zgXbixInudlsNGza0O++80x9rrmvAYYcdZsuXL3fDPXr0KAmyo0eP3q3EkYswO9HvL3te27F2+ZuXuzA7bU2WuzF8OtsLsm2O9CsAAMgtwiySILFhVlq2bGk1a9Z0ITPzoQkzZsxwf4Rr1qxxwxqve9FGlThyEWbbjfT6y7Z8c7ILsr/s/kv7+tuv/bEZxj7nhdm37/crAADILcIskiDRYbYy5SLMXtNpkguzbcb1d2H25iE3+2Mi/OvXXphd+I5fAQBAbhFmkQSE2ZgqOsyG+8v+bWJrF2ZfmP6CPzbD9g1ekP1b8efr9lwAAFQCwiySgDAbU0WH2SmLN7gge27bsXbt29e6MDtuxTh/bIb3/+2F2b6/8ysAAMg9wiySgDAbU0WH2b7vL3Nh9k8DZlj97vVd2f7Vdn9shp5Xe2F2Vh+/AgCA3CPMIgkIszHlos/sji+/tqELx7pW2WsGXePXZvhyh9mTh5o98UOznVv9SgAAco8wiyQgzMaUizArHWZ2cGG29dTWfk2GOW95rbLdLvYrAACoHIRZJAFhNqZchdkmQ5u4MDtq6Si/JkP/P3hhdtJLfgUAAJWDMIskIMzGlIswq3vKNujRwIXZzTs3+7Uh335r9tRhXpjdlP3JZgAA5AJhFklAmI0pF2F2xqczXJC9bMBlfk2GT8Z7QfalU/0KAAAqD2EWSUCYjSkXYfaVD19xYbbVxFZ+TYbhf/bC7DtP+hUAAFQewiySgDAbUy7C7B0j73Bh9u1Fb/s1Gdod7YXZFdP9CgAAKg9hFklAmI2posPsN99+Yyf1PMmF2bU71vq1IWs/9oJs61p+BQAAlYswiyQgzMZU0WF2zvo5Lsie98Z5fk2GcW29MDvoHr8CAIDKRZhFEhBmY6roMPvanNdcmH1k3CN+TYZXzvbC7PzhfgUAAJWLMIskIMzGVNFhdufXO23iyon24boP/ZqQ7RvMWh1s9rfiz/v6C78SAIDKRZhFEhBmY8rFBWBZTe/mtcr+50a/AgCAykeYRRIQZmOq1DDb67demP2gl18BAEDlI8wiCQizMVVamP1yh9mTh3rdDHZu9SsBAKh8hFkkAWE2pkoLsx8P8lpl/32hXwEAQH4QZpEEhNmYKi3MvtnMC7MT2vsVAADkB2EWSUCYjalSwuy333oPSVCY3bTUrwQAID8Is0gCwmxMlRJml07yguyLJ/oVAADkD2EWSUCYjalSwuyIv3hhduTjfgUAAPlDmEUSEGZjqpQw2+5oL8wum+JXAACQP4RZJAFhNqach9l1870gqz6z6jsLAECeEWaRBITZmHIeZsf/wwuzA5v7FQAA5BdhFklAmI0p52G2y3lemJ07xK8AACC/CLNIAsJsTDkNs9s3eE/8+lvx/L/+wq8EACC/CLNIAsJsTDkNszN6eK2yva/zKwAAyD/CLJKAMBtTTsOsQqzC7IxX/QoAAPKPMIskIMzGlLMwq24F6l6gbgbqbgAAQIEgzCIJCLMx5SzM6oIvtcp2PtevAACgMBBmkQSE2ZhyFmYH3u2F2fHt/AoAAAoDYRZJQJiNKSdhVg9H0EMSFGbXzfMrAQAoDIRZJAFhNqachNnlU70gq8fYAgBQYAizSALCbEw5CbMjH/fC7PBH/QoAAAoHYRZJQJiNKSdh9sUTvTC7dKJfAQBA4SDMIgkIszFVeJjdtNQLsuozq76zAAAUGMIskoAwG1OFh9mJL3phdsCdfgUAAIWFMIskIMzGVOFh9ssdZvOHm63+0K8AAKCwEGaRBITZmHLSZxYAgAJGmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBIkPs2vXrrUVK1b4Q/HMmjXLduzY4Q/FQ5gFAKQNYRZJkOgw27JlS9tvv/1cady4sV+bXVFRkZ144olu+h/84AfWqlUrf0z5CLMAgLQhzCIJEhtm7733XqtXr55NmzbNFixYYI0aNbKmTZv6Y6OdcsopLvSuXLnSJk+ebAceeKB16tTJH1s2wiwAIG0Is0iCxIbZGjVqWIcOHfwhs549e7oWV4XbKAqjGj9z5ky/xuzuu++2Bg0a+ENlI8wCANKGMIskSGSYVT9ZBVO1roaprm/fvv7Q7lT//e9/3x/yjB492r1n9erVfk12hFkAQNoQZpEEiQyz06dPdyF0w4YNfo1Hde3atfOHdte2bVurX7++P+QJwuyMGTP8ml007ogjjigphx12mP33f//3bnX7Wv7nf/7HDj/88MhxFG/7qESNo3hF26dWrVqR4yjsQ3GKts/Pf/7zyHEU9iE1Aj355JP+kREoTIkMs+oqkC3Mtm/f3h/anUJutjD7wQcf+DW7bNq0yaZMmVJSRo0aZS+99NJudftaFJC7d+8eOY4yxR555BE7++yzI8dRvKJ+3/37948cR5lid911l11++eWR4yhe+a//+i8bOXJk5DjKFPvd735nN998c+S4NJQ+ffrYkiVL/CMjUJgSGWYVNBVCo7oZ6MAeZcCAAVm7Gaxbt86vqVxqDcnWxxfmLs6Lc5eKNPvhD39oixYt8oeQ6Zlnnin3wtC0U5jNbBjALrprzsMPP+wPAShEib0ATHcyyLwA7OCDD7alS5f6Nbtbvny5C66ZF4Cdc845/lDlI8yWjTBbPsJs2Qiz5SPMlo0wCxS+xIbZ++67zwVa9Z8Nbs115513+mPNxo8f7y7YUogNnHrqqS4crVq1quTWXF26dPHHVj7CbNkIs+UjzJaNMFs+wmzZCLNA4UtsmJVu3bq5W2sdffTRpTqoqwX2rLPOsjVr1vg15v6HrXCkAKBxr732mj8mP3r37k2YLYO2jwqy0/YhzGbHPlQ+bR/CbHbsQ0DhS3SYBQAAQLoRZgEAAJBYhFkAAAAkFmEWAAAAiUWYzRNdHXvppZda8+bNbdKkSX5t8umCu+HDh9tzzz1nrVq18mtLe/DBB+3CCy+02267zaZOnerX7qLHD99000120UUXRc5n8+bN7o4WuovFHXfcYUVFRf6YXXSBny74u/LKK8tclsrWr18/t95XXXWVde3a1davX++P2aW8ZY+z/q+++qo1adLErrvuush56D333HOPm4f+jZpHPmj7aJlUtJ9of8qke023aNGizGXXOmvd92X9y5tHvmnf0XJFLVt5y14R6x9nHvmgZY0qYZW1/uXNA8C+I8zmgUKKguygQYPc/9wOOOAAGzx4sD822bQ+derUcffw1X19o2j9VYL11y3Shg4d6o81V6/36mluetiFbsGmA0FAQUZ3sdA89OALzUPT6JZrAdVpHrq9l+5BrPGqyzctgw58nTt3doH9kksuseOOO862b9/uT1H+su/N+teoUWO3eejqdb0ncx4rV670p8gfbZ8HHnjAffd6cp/uPPL000/7Y71lP/bYY8tc9opY//LmUQhuvPFGt71Uwipj/VesWOHeo7rwPFSfb9oeWp7MEgiWvbz11zpr3aPWX9OWt/7heWg7anuG5wGgYhBmK1kQ1MKtkZdddpndfvvt/lDVoP+5az0zaf0PPfRQf8ij9Vd4CSjot27d2h/y7hmsea1evdoNK+REzSN8kFDYy5yHDirhwJcPeixy2Nq1a926qRU1UN6yx1l/HVTD8+jYsaObR3Cw1sE1cx76EVKIB1rd51PLHoha9iBUBDLXXz+M9nT9NY/w47Ez55FvujXhKaecEhlm46y/1jcsvP47duyw/fffv9T6q07jRC3jUfNQfb4FYTabbMu+J+uvactaf21rbfPMeei7AVCxCLOVTKeGM586pv8pHn/88f5Q1ZAtzCq0RrUiBeuvFiO9T+8PU93AgQPd6+D0epjmcfLJJ7vXW7ZsyTqPN9980x8qHLVr1y55eEecZd+b9Vdo/t73vudO4Uu2eZx44on+UOF4/PHH3Q+cQHnLXtY2jLv+wTwyheeRT2r900Nh9MRDrUd4XeKuv9Y3LLz+em+29Q/mq8+Mmkfmds2HYNm0rFEtxdmWfU/WX9OWtf7a1tnmoe8IQMUp/ZeGnFI/yVtvvdUf8qiVpHr16v5Q1ZDtYHDttdeWaoUOr7/6nOl9mQcgtXiodVF0ajBqHj/5yU/c67Lm8fLLL/tDhUEtNVrW4OEZcZZ9b9e/bt26JY+AjpqHDsQKSIVAy6KiftX6oTNy5Eh/TPSyq+9xsOzZ1l91cdc/mEem8Dzy6frrr3f9iUXhKRwg466/1jcsvP5xwly2QBhelnzRMvzv//6vHXPMMW6ZNTxkyBB/bPZl35P117Rlrb+2dbZ56DsCUHFK/6Uhpy644AJ32jRM/5P97ne/6w9VDdkOBlr/zNOQ4fXXxXB6X3AqL6BWx6Df5Pnnnx85D7U8ysSJE8udRyEItlH4gBhn2fd2/XVKuqx5aDmCeeSblkVF+4tCyb///W9/TPSyq891eeuvurjrH8wjU3ge+aJW/F/84hf+UOkwG3f9w/udhNc/TpjLFgjDy5IvEyZM8F9566KWfXXfCWRb9j1Zf01b1vprW2ebh74jABWn9F8acuoPf/iDa50MU+ucTjVXJdkOBlp/XaEfFl5/nTbV+2bNmuWGA4cccoj16NHDvdadAKLmERzgly9fnnUeasEsBNu2bSvVz1PiLPverr9absuah5YlHJIKxTPPPOMuEvz888/dcNSyqxWsvPVXXdz1D+aRKTyPfPnpT3/q/r6CovCkEoSsuOsfFcSC9c/296u64HOyBcIgzBUSLVd4fbIt+56sv6Yta/21rbPNQ98RgIpT+i8NOaX/2dWqVcs+++wzv8asWbNmBXkA2BfZDgZa/yOPPNIf8qjbRbD+X3zxhbtoQqeNA0uWLNntIJJtHuF+lZo+ah6ZF2DlQ3AaeNiwYX7N7spb9rjrr7slBILvIzyPzAtRdOo5PI9CEaz/jBkz3HCcZc9cf3Xj2NP11/RB9w/JnEe+6G8lWwnEWX+tb1h4/YP9JWr9g79DfV7UPMLLUSjatGnj/r8SyLbse7L+mras9de2zjYPABWLv6pKFlzhqvuwioKNDqqF1pdzXwUHg0y6Iv+www4rOdBGrb8OtNdcc40/5A2rv2dg5syZbt7BPGbPnu3moTslBHR/1fCBRvMIB5V80TJr2XVwzaa8Zd+b9dc9e8uaR/B9heeRL+Fl0AWBWhf9AAxuXxZn2Sti/cubR6FQeMoMkBWx/urfnzkP1QXUj1nv0XslmEe4f3M+aDmCZRL9GPrtb3/r+l8HgmUvb/21zoHM9de05a2/tnl4Htqe+m4AVCzCbB7of6D6n179+vXtoIMOclcWVxUKXlq3zBI+uATrrz5s1apVi1x//U9foV/3E1WQzTxAvvTSS24eOojrFLQ+N0zdFS6++GIXgvR+hb1CuOhCyxveLkEJL3+cZY+7/jVr1nRdOKLmEXxXmofudZw5j3zRMukHT7Ct1Aod3MkiEF7/qGUPfiTty/rHmUch0PKrhFXW+us9em+2eeRDECr196P/x+j1mWeeWerhG3HXX+u+t+sfnoe2Y9Q8AOw7wmyeqMVp8uTJrtWgKtGBJFsJi7P+CxYscPfjDXfJCNPtpjTfzKu2w3TgUN/BzIth8iVzm4RLpvKWvSLWX+8tbx75EN4u69at82t3V976a5217vuy/nHmkW/BdspUWetf3jwqm/pWf/zxxyXbpazlqoz1jzMPAPuGMAsAAIDEIswCAAAgsQizAAAASCzCLAAAABKLMAsAAIDEIswCAAAgsQizAAAASCzCLFCFBPfWXLRokV/jCepzSfPPvHl/PrVv395uvPFGt0y5XncAQP4QZoEqRE8gOuOMM+zqq6/2azwKc3raUS5VxmfEpRvna1n02GAt156GWQXgzKc5AQAKE2EWqEIUwBTEFOSGDRvm16YvzGpZqlevbs2aNdvjICuEWQBIDsIsUIUEYfaee+6xk046ya8tHTSjwlq4Lpi+X79+Jc+3P+2009y4J554wurWrWs1atSwp59+2tVJ8J6+ffvaL3/5S/f6yiuvLPWYz1deecXq16/vxp966qnWqVMnf8yu5de/RxxxhHudTYsWLdzz7n/84x/v9jl6r+YdlGzz0PJefvnlVq1aNTednpsvme9XCeg9WmbVaR1at27tj9m1/s8995x7Fv9BBx1Uav0ff/xx+9WvflUy37LWDwAQD2EWqEKCMLhy5Ur7/ve/by+++KKrD4JWIAiMYeG6YHoN63VRUZGdd955blin7pcsWWIjRoxwnzFt2rTd3nPCCSfYyJEj3bDmee6557rx0rhxY7viiitsyJAhblhBVu8JhoPlv+6662zx4sXuc6IoyGo6fcakSZPsmmuu2e1zgs8ui8a/+uqrtmXLFjes9wQ0LnP7dO7c2S2rwrro38MPP7xkOFj/YLmCZdA6BzR+zJgxtnXrVjfctWtX9y8AYO8RZoEqJAiDwetatWrZzp07S4JWICqsheuC6SdPnuyG5e6777ZDDjnEduzY4deYa50MWlaD97zxxhtuWBRSVTdo0CDbsGGDe92xY0d/rCf8ufpX06xfv94NR9m4caObplevXn6NudbP4HMkCJJl0XiF2TVr1vg1u0Rtn4YNG5aqu/baa10ruATr//rrr7th0TKqTsusHxh6remCMAsA2HeEWaAKUdgKhzh1BXjsscdKglYgKqyF6zKnl8x5S9R7FNzC6tSp41pzp0yZ4sZHlWAeUZ+RSS3Beo/CcZiCtT5HtCzlzUefdcopp7h5KagHrcMSXq9AeHnDJficqPUPAnzQet28eXO3PfS93H///TZu3DhXDwDYe4RZoArJDIPqZrD//vvHCrPHH398Sd2+hNnp06e7YQnCXPfu3d3twvRap9mzifqMTMF8wq3G8qMf/ch9jsQJs4HBgwfb7373OzfPiRMnurqo7aN+wuovnE3U+msZVZd5q7TevXvb7bff7ra5Ws4BAHuPMAtUIVFhUBc23XHHHS5UBdRXUxd2Bd5//303XWYwDYuadzj0Be95/vnn3bDoXq8K0zNmzHDD+ox7773XvQ4Lwl7UZ0TRxWhBK6wohIY/p7wwu3DhQv/VLpo+6LqguyA0bdrUvQ40adIkcp5Bt4uo9dcyBheWZQZa0fTz5s3zhwAAe4MwC1QhUWGwf//+LjSFw+mcOXPsF7/4hbuoS8FNp+j1vooIs7rgSfUqGg7Gi0LnD37wA3dFv06za5ym13sl6jOiDBw40PXf1bTnn39+qc8pL8xq/GWXXebeEyzD6aef7o/1Lu7SPBVgw/PVXRqOO+44+/3vf+/qte2C8cH6n3nmmZHrr/H6HA0/9NBDdsEFF9g555zjxgEA9h5hFqhCFJSC8BQWVa+LkNQSqQuydLeC8DQKXpnTR80j23t69uzpXof7oQbUkqnT7LqtV+Y04fmVZ9asWW7Z1fo7fPhwv9YTtfyZdBeE4PN0T97grgYB1T377LOl5qMfB7oll+p1m7FAEGbV0qp6vTd8r1/Ruup9arHV9FwIBgD7jjALABUgCLMAgMrF/3kBoAIQZgEgP/g/LwBUAIVZFQBA5SLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgos/8PAKXZKAZp7rQAAAAASUVORK5CYII=">
## Tokenizer
Le tokenizer de départ est [BarthezTokenizer](https://huggingface.co/transformers/model_doc/barthez.html) auquel ont été rajouté les tokens spéciaux \<sep\> et \<hl\>.
## Utilisation
_Le modèle est un POC, nous garantissons pas ses performances_
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = Text2TextGenerationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("Les projecteurs peuvent être utilisées pour <hl>illuminer<hl> des terrains de jeu extérieurs")
# >>> [{'generated_text': 'À quoi servent les projecteurs sur les terrains de jeu extérieurs?'}]
```
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "Les Etats signataires de la convention sur la diversité biologique des Nations unies doivent parvenir, lors de la COP15, qui s’ouvre <hl>lundi<hl>, à un nouvel accord mondial pour enrayer la destruction du vivant au cours de la prochaine décennie."
inputs = loaded_tokenizer(text, return_tensors='pt')
out = loaded_model.generate(
input_ids=inputs.input_ids,
attention_mask=inputs.attention_mask,
num_beams=16,
num_return_sequences=16,
length_penalty=10
)
questions = []
for question in out:
questions.append(loaded_tokenizer.decode(question, skip_special_tokens=True))
for q in questions:
print(q)
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand a lieu la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations unies?
# Quand se tient la conférence de la diversité biologique des Nations unies?
# Quand a lieu la conférence sur la diversité biologique des Nations unies?
# Quand a lieu la conférence de la diversité biologique des Nations unies?
# Quand se tient la conférence des Nations unies sur la diversité biologique?
# Quand a lieu la conférence des Nations unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations Unies?
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence de la diversité biologique des Nations Unies?
# Quand la COP15 a-t-elle lieu?
# Quand la COP15 a-t-elle lieu?
# Quand se tient la conférence sur la diversité biologique?
# Quand s'ouvre la COP15,?
# Quand s'ouvre la COP15?
```
## Citation
Model based on:
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
lincoln/camembert-squadFR-fquad-piaf-answer-extraction
|
lincoln
| 2021-10-11T15:01:04Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"answer extraction",
"fr",
"dataset:squadFR",
"dataset:fquad",
"dataset:piaf",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
language:
- fr
license: mit
datasets:
- squadFR
- fquad
- piaf
tags:
- camembert
- answer extraction
---
# Extraction de réponse
Ce modèle est _fine tuné_ à partir du modèle [camembert-base](https://huggingface.co/camembert-base) pour la tâche de classification de tokens.
L'objectif est d'identifier les suites de tokens probables qui pourrait être l'objet d'une question.
## Données d'apprentissage
La base d'entrainement est la concatenation des bases SquadFR, [fquad](https://huggingface.co/datasets/fquad), [piaf](https://huggingface.co/datasets/piaf).
Les réponses de chaque contexte ont été labelisées avec le label "ANS".
Volumétrie (nombre de contexte):
* train: 24 652
* test: 1 370
* valid: 1 370
## Entrainement
L'apprentissage s'est effectué sur une carte Tesla K80.
* Batch size: 16
* Weight decay: 0.01
* Learning rate: 2x10-5 (décroit linéairement)
* Paramètres par défaut de la classe [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)
* Total steps: 1 000
Le modèle semble sur apprendre au delà :

## Critiques
Le modèle n'a pas de bonnes performances et doit être corrigé après prédiction pour être cohérent. La tâche de classification n'est pas évidente car le modèle doit identifier des groupes de token _sachant_ qu'une question peut être posée.

## Utilisation
_Le modèle est un POC, nous garantissons pas ses performances_
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import numpy as np
model_name = "lincoln/camembert-squadFR-fquad-piaf-answer-extraction"
loaded_tokenizer = AutoTokenizer.from_pretrained(model_path)
loaded_model = AutoModelForTokenClassification.from_pretrained(model_path)
text = "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus,\
des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\
Elle est souvent associée aux données massives et à l'analyse des données."
inputs = loaded_tokenizer(text, return_tensors="pt", return_offsets_mapping=True)
outputs = loaded_model(inputs.input_ids).logits
probs = 1 / (1 + np.exp(-outputs.detach().numpy()))
probs[:, :, 1][0] = np.convolve(probs[:, :, 1][0], np.ones(2), 'same') / 2
sentences = loaded_tokenizer.tokenize(text, add_special_tokens=False)
prob_answer_tokens = probs[:, 1:-1, 1].flatten().tolist()
offset_start_mapping = inputs.offset_mapping[:, 1:-1, 0].flatten().tolist()
offset_end_mapping = inputs.offset_mapping[:, 1:-1, 1].flatten().tolist()
threshold = 0.4
entities = []
for ix, (token, prob_ans, offset_start, offset_end) in enumerate(zip(sentences, prob_answer_tokens, offset_start_mapping, offset_end_mapping)):
entities.append({
'entity': 'ANS' if prob_ans > threshold else 'O',
'score': prob_ans,
'index': ix,
'word': token,
'start': offset_start,
'end': offset_end
})
for p in entities:
print(p)
# {'entity': 'O', 'score': 0.3118681311607361, 'index': 0, 'word': '▁La', 'start': 0, 'end': 2}
# {'entity': 'O', 'score': 0.37866950035095215, 'index': 1, 'word': '▁science', 'start': 3, 'end': 10}
# {'entity': 'ANS', 'score': 0.45018652081489563, 'index': 2, 'word': '▁des', 'start': 11, 'end': 14}
# {'entity': 'ANS', 'score': 0.4615934491157532, 'index': 3, 'word': '▁données', 'start': 15, 'end': 22}
# {'entity': 'O', 'score': 0.35033443570137024, 'index': 4, 'word': '▁est', 'start': 23, 'end': 26}
# {'entity': 'O', 'score': 0.24779987335205078, 'index': 5, 'word': '▁un', 'start': 27, 'end': 29}
# {'entity': 'O', 'score': 0.27084410190582275, 'index': 6, 'word': '▁domaine', 'start': 30, 'end': 37}
# {'entity': 'O', 'score': 0.3259460926055908, 'index': 7, 'word': '▁in', 'start': 38, 'end': 40}
# {'entity': 'O', 'score': 0.371802419424057, 'index': 8, 'word': 'terdisciplinaire', 'start': 40, 'end': 56}
# {'entity': 'O', 'score': 0.3140853941440582, 'index': 9, 'word': '▁qui', 'start': 57, 'end': 60}
# {'entity': 'O', 'score': 0.2629334330558777, 'index': 10, 'word': '▁utilise', 'start': 61, 'end': 68}
# {'entity': 'O', 'score': 0.2968383729457855, 'index': 11, 'word': '▁des', 'start': 69, 'end': 72}
# {'entity': 'O', 'score': 0.33898216485977173, 'index': 12, 'word': '▁méthodes', 'start': 73, 'end': 81}
# {'entity': 'O', 'score': 0.3776060938835144, 'index': 13, 'word': ',', 'start': 81, 'end': 82}
# {'entity': 'O', 'score': 0.3710060119628906, 'index': 14, 'word': '▁des', 'start': 83, 'end': 86}
# {'entity': 'O', 'score': 0.35908180475234985, 'index': 15, 'word': '▁processus', 'start': 87, 'end': 96}
# {'entity': 'O', 'score': 0.3890596628189087, 'index': 16, 'word': ',', 'start': 96, 'end': 97}
# {'entity': 'O', 'score': 0.38341325521469116, 'index': 17, 'word': '▁des', 'start': 101, 'end': 104}
# {'entity': 'O', 'score': 0.3743852376937866, 'index': 18, 'word': '▁', 'start': 105, 'end': 106}
# {'entity': 'O', 'score': 0.3943936228752136, 'index': 19, 'word': 'algorithme', 'start': 105, 'end': 115}
# {'entity': 'O', 'score': 0.39456743001937866, 'index': 20, 'word': 's', 'start': 115, 'end': 116}
# {'entity': 'O', 'score': 0.3846966624259949, 'index': 21, 'word': '▁et', 'start': 117, 'end': 119}
# {'entity': 'O', 'score': 0.367380827665329, 'index': 22, 'word': '▁des', 'start': 120, 'end': 123}
# {'entity': 'O', 'score': 0.3652925491333008, 'index': 23, 'word': '▁systèmes', 'start': 124, 'end': 132}
# {'entity': 'O', 'score': 0.3975735306739807, 'index': 24, 'word': '▁scientifiques', 'start': 133, 'end': 146}
# {'entity': 'O', 'score': 0.36417365074157715, 'index': 25, 'word': '▁pour', 'start': 147, 'end': 151}
# {'entity': 'O', 'score': 0.32438698410987854, 'index': 26, 'word': '▁extraire', 'start': 152, 'end': 160}
# {'entity': 'O', 'score': 0.3416857123374939, 'index': 27, 'word': '▁des', 'start': 161, 'end': 164}
# {'entity': 'O', 'score': 0.3674810230731964, 'index': 28, 'word': '▁connaissances', 'start': 165, 'end': 178}
# {'entity': 'O', 'score': 0.38362061977386475, 'index': 29, 'word': '▁et', 'start': 179, 'end': 181}
# {'entity': 'O', 'score': 0.364640474319458, 'index': 30, 'word': '▁des', 'start': 182, 'end': 185}
# {'entity': 'O', 'score': 0.36050117015838623, 'index': 31, 'word': '▁idées', 'start': 186, 'end': 191}
# {'entity': 'O', 'score': 0.3768993020057678, 'index': 32, 'word': '▁de', 'start': 192, 'end': 194}
# {'entity': 'O', 'score': 0.39184248447418213, 'index': 33, 'word': '▁nombreuses', 'start': 195, 'end': 205}
# {'entity': 'ANS', 'score': 0.4091200828552246, 'index': 34, 'word': '▁données', 'start': 206, 'end': 213}
# {'entity': 'ANS', 'score': 0.41234123706817627, 'index': 35, 'word': '▁structurelle', 'start': 214, 'end': 226}
# {'entity': 'ANS', 'score': 0.40243157744407654, 'index': 36, 'word': 's', 'start': 226, 'end': 227}
# {'entity': 'ANS', 'score': 0.4007353186607361, 'index': 37, 'word': '▁et', 'start': 228, 'end': 230}
# {'entity': 'ANS', 'score': 0.40597623586654663, 'index': 38, 'word': '▁non', 'start': 231, 'end': 234}
# {'entity': 'ANS', 'score': 0.40272021293640137, 'index': 39, 'word': '▁structurée', 'start': 235, 'end': 245}
# {'entity': 'O', 'score': 0.392631471157074, 'index': 40, 'word': 's', 'start': 245, 'end': 246}
# {'entity': 'O', 'score': 0.34266412258148193, 'index': 41, 'word': '.', 'start': 246, 'end': 247}
# {'entity': 'O', 'score': 0.26178646087646484, 'index': 42, 'word': '▁Elle', 'start': 255, 'end': 259}
# {'entity': 'O', 'score': 0.2265639454126358, 'index': 43, 'word': '▁est', 'start': 260, 'end': 263}
# {'entity': 'O', 'score': 0.22844195365905762, 'index': 44, 'word': '▁souvent', 'start': 264, 'end': 271}
# {'entity': 'O', 'score': 0.2475772500038147, 'index': 45, 'word': '▁associée', 'start': 272, 'end': 280}
# {'entity': 'O', 'score': 0.3002186715602875, 'index': 46, 'word': '▁aux', 'start': 281, 'end': 284}
# {'entity': 'O', 'score': 0.3875720798969269, 'index': 47, 'word': '▁données', 'start': 285, 'end': 292}
# {'entity': 'ANS', 'score': 0.445063054561615, 'index': 48, 'word': '▁massive', 'start': 293, 'end': 300}
# {'entity': 'ANS', 'score': 0.4419114589691162, 'index': 49, 'word': 's', 'start': 300, 'end': 301}
# {'entity': 'ANS', 'score': 0.4240635633468628, 'index': 50, 'word': '▁et', 'start': 302, 'end': 304}
# {'entity': 'O', 'score': 0.3900952935218811, 'index': 51, 'word': '▁à', 'start': 305, 'end': 306}
# {'entity': 'O', 'score': 0.3784807324409485, 'index': 52, 'word': '▁l', 'start': 307, 'end': 308}
# {'entity': 'O', 'score': 0.3459452986717224, 'index': 53, 'word': "'", 'start': 308, 'end': 309}
# {'entity': 'O', 'score': 0.37636008858680725, 'index': 54, 'word': 'analyse', 'start': 309, 'end': 316}
# {'entity': 'ANS', 'score': 0.4475618302822113, 'index': 55, 'word': '▁des', 'start': 317, 'end': 320}
# {'entity': 'ANS', 'score': 0.43845775723457336, 'index': 56, 'word': '▁données', 'start': 321, 'end': 328}
# {'entity': 'O', 'score': 0.3761221170425415, 'index': 57, 'word': '.', 'start': 328, 'end': 329}
```
|
chrommium/sbert_large-finetuned-sent_in_news_sents_3lab
|
chrommium
| 2021-10-11T13:29:58Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents_3lab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents_3lab
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9443
- Accuracy: 0.8580
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 264 | 0.6137 | 0.8608 | 0.3084 |
| 0.524 | 2.0 | 528 | 0.6563 | 0.8722 | 0.4861 |
| 0.524 | 3.0 | 792 | 0.7110 | 0.8494 | 0.4687 |
| 0.2225 | 4.0 | 1056 | 0.7323 | 0.8608 | 0.6015 |
| 0.2225 | 5.0 | 1320 | 0.9604 | 0.8551 | 0.6185 |
| 0.1037 | 6.0 | 1584 | 0.8801 | 0.8523 | 0.5535 |
| 0.1037 | 7.0 | 1848 | 0.9443 | 0.8580 | 0.6199 |
| 0.0479 | 8.0 | 2112 | 1.0048 | 0.8608 | 0.6168 |
| 0.0479 | 9.0 | 2376 | 0.9757 | 0.8551 | 0.6097 |
| 0.0353 | 10.0 | 2640 | 1.0743 | 0.8580 | 0.6071 |
| 0.0353 | 11.0 | 2904 | 1.1216 | 0.8580 | 0.6011 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
juliensimon/autonlp-imdb-demo-hf-16622775
|
juliensimon
| 2021-10-11T12:46:02Z
| 6
| 1
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-imdb-demo-hf",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- juliensimon/autonlp-data-imdb-demo-hf
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622775
## Validation Metrics
- Loss: 0.18653589487075806
- Accuracy: 0.9408
- Precision: 0.9537643207855974
- Recall: 0.9272076372315036
- AUC: 0.985847396174344
- F1: 0.9402985074626865
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622775
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
mse30/bart-base-finetuned-arxiv
|
mse30
| 2021-10-11T11:22:28Z
| 8
| 2
|
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:scientific_papers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-base-finetuned-arxiv
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 13.6917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-arxiv
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2912
- Rouge1: 13.6917
- Rouge2: 5.9564
- Rougel: 11.1734
- Rougelsum: 12.6817
- Gen Len: 19.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6027 | 1.0 | 6345 | 2.4504 | 13.3687 | 5.603 | 10.8671 | 12.3297 | 20.0 |
| 2.4807 | 2.0 | 12690 | 2.3561 | 13.6207 | 5.855 | 11.1073 | 12.594 | 20.0 |
| 2.4041 | 3.0 | 19035 | 2.3035 | 13.6222 | 5.8863 | 11.1173 | 12.5984 | 20.0 |
| 2.3716 | 4.0 | 25380 | 2.2912 | 13.6917 | 5.9564 | 11.1734 | 12.6817 | 19.9992 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
GKLMIP/bert-myanmar-base-uncased
|
GKLMIP
| 2021-10-11T04:58:59Z
| 28
| 1
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
```
|
GKLMIP/electra-myanmar-base-uncased
|
GKLMIP
| 2021-10-11T04:58:43Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
```
|
suwani/BERT_NER_Ep5_PAD_75-finetuned-ner
|
suwani
| 2021-10-11T04:05:50Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep5_PAD_75-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep5_PAD_75-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3504
- Precision: 0.6469
- Recall: 0.7246
- F1: 0.6835
- Accuracy: 0.9013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3695 | 0.5799 | 0.6200 | 0.5993 | 0.8792 |
| 0.4695 | 2.0 | 576 | 0.3443 | 0.5823 | 0.7252 | 0.6460 | 0.8862 |
| 0.4695 | 3.0 | 864 | 0.3189 | 0.6407 | 0.7030 | 0.6704 | 0.8978 |
| 0.2184 | 4.0 | 1152 | 0.3458 | 0.6383 | 0.7335 | 0.6826 | 0.8980 |
| 0.2184 | 5.0 | 1440 | 0.3504 | 0.6469 | 0.7246 | 0.6835 | 0.9013 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
bsingh/roberta_goEmotion
|
bsingh
| 2021-10-11T00:26:09Z
| 992
| 3
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"emotions",
"en",
"dataset:go_emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language: en
tags:
- text-classification
- pytorch
- roberta
- emotions
datasets:
- go_emotions
license: mit
widget:
- text: "I am not feeling well today."
---
## This model is trained for GoEmotions dataset which contains labeled 58k Reddit comments with 28 emotions
- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral
## Training details:
- The training script is provided here: https://github.com/bsinghpratap/roberta_train_goEmotion
- Please feel free to start an issue in the repo if you have trouble running the model and I would try to respond as soon as possible.
- The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', 'surprise']
- I'll try to fine-tune the model further and update here if RoBERTa achieves a better performance.
- Each text datapoint can have more than 1 label. Most of the training set had 1 label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So currently I just used the first label for each of the datapoint. Not ideal but it does a decent job.
## Model Performance
============================================================<br>
Emotion: admiration<br>
============================================================<br>
GoEmotions Paper: 0.65<br>
RoBERTa: 0.62<br>
Support: 504<br>
============================================================<br>
Emotion: amusement<br>
============================================================<br>
GoEmotions Paper: 0.80<br>
RoBERTa: 0.78<br>
Support: 252<br>
============================================================<br>
Emotion: anger<br>
============================================================<br>
GoEmotions Paper: 0.47<br>
RoBERTa: 0.44<br>
Support: 197<br>
============================================================<br>
Emotion: annoyance<br>
============================================================<br>
GoEmotions Paper: 0.34<br>
RoBERTa: 0.22<br>
Support: 286<br>
============================================================<br>
Emotion: approval<br>
============================================================<br>
GoEmotions Paper: 0.36<br>
RoBERTa: 0.31<br>
Support: 318<br>
============================================================<br>
Emotion: caring<br>
============================================================<br>
GoEmotions Paper: 0.39<br>
RoBERTa: 0.24<br>
Support: 114<br>
============================================================<br>
Emotion: confusion<br>
============================================================<br>
GoEmotions Paper: 0.37<br>
RoBERTa: 0.29<br>
Support: 139<br>
============================================================<br>
Emotion: curiosity<br>
============================================================<br>
GoEmotions Paper: 0.54<br>
RoBERTa: 0.48<br>
Support: 233<br>
============================================================<br>
Emotion: disappointment<br>
============================================================<br>
GoEmotions Paper: 0.28<br>
RoBERTa: 0.18<br>
Support: 127<br>
============================================================<br>
Emotion: disapproval<br>
============================================================<br>
GoEmotions Paper: 0.39<br>
RoBERTa: 0.26<br>
Support: 220<br>
============================================================<br>
Emotion: gratitude<br>
============================================================<br>
GoEmotions Paper: 0.86<br>
RoBERTa: 0.84<br>
Support: 288<br>
============================================================<br>
Emotion: joy<br>
============================================================<br>
GoEmotions Paper: 0.51<br>
RoBERTa: 0.47<br>
Support: 116<br>
============================================================<br>
Emotion: love<br>
============================================================<br>
GoEmotions Paper: 0.78<br>
RoBERTa: 0.68<br>
Support: 169<br>
============================================================<br>
Emotion: neutral<br>
============================================================<br>
GoEmotions Paper: 0.68<br>
RoBERTa: 0.61<br>
Support: 1606<br>
============================================================<br>
Emotion: optimism<br>
============================================================<br>
GoEmotions Paper: 0.51<br>
RoBERTa: 0.52<br>
Support: 120<br>
============================================================<br>
Emotion: realization<br>
============================================================<br>
GoEmotions Paper: 0.21<br>
RoBERTa: 0.15<br>
Support: 109<br>
============================================================<br>
Emotion: sadness<br>
============================================================<br>
GoEmotions Paper: 0.49<br>
RoBERTa: 0.42<br>
Support: 108
|
Lazaro97/results
|
Lazaro97
| 2021-10-10T21:48:18Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.8404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- Accuracy: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3542 | 1.0 | 125 | 0.3611 | 0.839 |
| 0.2255 | 2.0 | 250 | 0.3793 | 0.8404 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Fiddi/distilbert-base-uncased-finetuned-ner
|
Fiddi
| 2021-10-10T20:08:19Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9290544285555925
- name: Recall
type: recall
value: 0.9375769101689228
- name: F1
type: f1
value: 0.9332962138084633
- name: Accuracy
type: accuracy
value: 0.9841136193940935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9291
- Recall: 0.9376
- F1: 0.9333
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2412 | 1.0 | 878 | 0.0688 | 0.9178 | 0.9246 | 0.9212 | 0.9815 |
| 0.0514 | 2.0 | 1756 | 0.0608 | 0.9251 | 0.9344 | 0.9298 | 0.9832 |
| 0.0304 | 3.0 | 2634 | 0.0604 | 0.9291 | 0.9376 | 0.9333 | 0.9841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-cola-copy4
|
gchhablani
| 2021-10-10T19:30:36Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy4
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy4
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6500
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6345 | 1.0 | 2138 | 0.6611 | 0.0 |
| 0.6359 | 2.0 | 4276 | 0.6840 | 0.0 |
| 0.6331 | 3.0 | 6414 | 0.6500 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
S34NtheGuy/DialoGPT-small-wetterlettuce
|
S34NtheGuy
| 2021-10-10T17:59:38Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z
|
---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data
|
mamlong34/t5_small_cosmos_qa
|
mamlong34
| 2021-10-10T15:37:59Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cosmos_qa",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cosmos_qa
metrics:
- accuracy
model-index:
- name: t5_small_cosmos_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_small_cosmos_qa
This model is a fine-tuned version of [mamlong34/t5_small_race_mutlirc](https://huggingface.co/mamlong34/t5_small_race_mutlirc) on the cosmos_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5614
- Accuracy: 0.6067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4811 | 1.0 | 3158 | 0.5445 | 0.5548 |
| 0.4428 | 2.0 | 6316 | 0.5302 | 0.5836 |
| 0.3805 | 3.0 | 9474 | 0.5614 | 0.6067 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
JonatanGk/roberta-base-ca-finetuned-cyberbullying-catalan
|
JonatanGk
| 2021-10-10T09:50:17Z
| 5
| 1
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"ca",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z
|
---
language: ca
tags:
- "catalan"
metrics:
- accuracy
widget:
- text: "Ets més petita que un barrufet!!"
- text: "Ets tan lletja que et donaven de menjar per sota la porta."
---
# roberta-base-ca-finetuned-cyberbullying-catalan
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the dataset generated scrapping all social networks (Twitter, Youtube ...) to detect cyberbullying on Catalan.
It achieves the following results on the evaluation set:
- Loss: 0.1508
- Accuracy: 0.9665
## Training and evaluation data
I use the concatenation from multiple datasets generated scrapping social networks (Twitter,Youtube,Discord...) to fine-tune this model. The total number of sentence pairs is above 410k sentences. Trained similar method at [roberta-base-bne-finetuned-cyberbullying-spanish](https://huggingface.co/JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish)
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-ciberbullying-catalan"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Des que et vaig veure m'en vaig enamorar de tu."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9996786117553711}]
bullying_analysis(
"Ets tan lletja que et donaven de menjar per sota la porta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9927878975868225}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(CATALAN).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
|
MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
|
MaryaAI
| 2021-10-10T06:33:20Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:syssr_en_ar",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- syssr_en_ar
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: syssr_en_ar
type: syssr_en_ar
args: default
metrics:
- name: Bleu
type: bleu
value: 7.9946
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2046
- Bleu: 7.9946
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 1 | 1.2038 | 7.9946 | 20.0 |
| No log | 2.0 | 2 | 1.2038 | 7.9946 | 20.0 |
| No log | 3.0 | 3 | 1.2038 | 7.9946 | 20.0 |
| No log | 4.0 | 4 | 1.2036 | 7.9946 | 20.0 |
| No log | 5.0 | 5 | 1.2046 | 7.9946 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
staceythompson/autonlp-myclassification-fortext-16332728
|
staceythompson
| 2021-10-10T00:24:34Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"unk",
"dataset:staceythompson/autonlp-data-myclassification-fortext",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- staceythompson/autonlp-data-myclassification-fortext
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 16332728
## Validation Metrics
- Loss: 0.08077391237020493
- Accuracy: 0.9846153846153847
- Macro F1: 0.9900793650793651
- Micro F1: 0.9846153846153847
- Weighted F1: 0.9846153846153847
- Macro Precision: 0.9900793650793651
- Micro Precision: 0.9846153846153847
- Weighted Precision: 0.9846153846153847
- Macro Recall: 0.9900793650793651
- Micro Recall: 0.9846153846153847
- Weighted Recall: 0.9846153846153847
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/staceythompson/autonlp-myclassification-fortext-16332728
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
gchhablani/fnet-large-finetuned-cola
|
gchhablani
| 2021-10-09T14:36:27Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6243
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 |
| 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 |
| 0.616 | 3.0 | 6414 | 0.6243 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-qqp
|
gchhablani
| 2021-10-09T08:56:52Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-large-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8943111550828593
- name: F1
type: f1
value: 0.8556565212985171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-qqp
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5515
- Accuracy: 0.8943
- F1: 0.8557
- Combined Score: 0.8750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.4574 | 1.0 | 90962 | 0.4946 | 0.8694 | 0.8297 | 0.8496 |
| 0.3387 | 2.0 | 181924 | 0.4745 | 0.8874 | 0.8437 | 0.8655 |
| 0.2029 | 3.0 | 272886 | 0.5515 | 0.8943 | 0.8557 | 0.8750 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingtweets/adhd_93
|
huggingtweets
| 2021-10-09T01:14:07Z
| 7
| 1
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/adhd_93/1633742043558/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442325298138255362/h2ntdCgO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LGBTDHD</div>
<div style="text-align: center; font-size: 14px;">@adhd_93</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LGBTDHD.
| Data | LGBTDHD |
| --- | --- |
| Tweets downloaded | 3236 |
| Retweets | 296 |
| Short tweets | 153 |
| Tweets kept | 2787 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2o8cqxfu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @adhd_93's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/227a55pn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/227a55pn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/adhd_93')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingartists/the-notorious-big
|
huggingartists
| 2021-10-08T17:26:01Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/the-notorious-big",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/the-notorious-big
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/664976b54a605d6ac0df2415a8ccac16.564x564x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Notorious B.I.G.</div>
<a href="https://genius.com/artists/the-notorious-big">
<div style="text-align: center; font-size: 14px;">@the-notorious-big</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from The Notorious B.I.G..
Dataset is available [here](https://huggingface.co/datasets/huggingartists/the-notorious-big).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-notorious-big")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/wkvasju4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Notorious B.I.G.'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1coezuy2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1coezuy2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/the-notorious-big')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-notorious-big")
model = AutoModelWithLMHead.from_pretrained("huggingartists/the-notorious-big")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/yung-lean
|
huggingartists
| 2021-10-08T15:22:16Z
| 4
| 1
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/yung-lean",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/yung-lean
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/8c898f8c39dbd271b3ccfd5303d423c7.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Yung Lean</div>
<a href="https://genius.com/artists/yung-lean">
<div style="text-align: center; font-size: 14px;">@yung-lean</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Yung Lean.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/yung-lean).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/yung-lean")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3mtv3swy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Yung Lean's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1qh8r5pu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1qh8r5pu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/yung-lean')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/yung-lean")
model = AutoModelWithLMHead.from_pretrained("huggingartists/yung-lean")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
svanhvit/XLMR-ENIS-finetuned-conll_ner
|
svanhvit
| 2021-10-08T15:14:21Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-conll_ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8754622097322882
- name: Recall
type: recall
value: 0.8425622775800712
- name: F1
type: f1
value: 0.8586972290729725
- name: Accuracy
type: accuracy
value: 0.9860744627305035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-conll_ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Precision: 0.8755
- Recall: 0.8426
- F1: 0.8587
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0493 | 1.0 | 2904 | 0.0673 | 0.8588 | 0.8114 | 0.8344 | 0.9841 |
| 0.0277 | 2.0 | 5808 | 0.0620 | 0.8735 | 0.8275 | 0.8499 | 0.9855 |
| 0.0159 | 3.0 | 8712 | 0.0713 | 0.8755 | 0.8426 | 0.8587 | 0.9861 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-repro-960h-libri-120k-steps
|
patrickvonplaten
| 2021-10-08T14:12:07Z
| 2
| 0
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
https://wandb.ai/patrickvonplaten/pretraining-wav2vec2/reports/Wav2Vec2-Large--VmlldzoxMTAwODM4?accessToken=wm3qzcnldrwsa31tkvf2pdmilw3f63d4twtffs86ou016xjbyilh55uoi3mo1qzc
|
Ajaykannan6/autonlp-manthan-16122692
|
Ajaykannan6
| 2021-10-08T13:52:19Z
| 9
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autonlp",
"unk",
"dataset:Ajaykannan6/autonlp-data-manthan",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z
|
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Ajaykannan6/autonlp-data-manthan
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 16122692
## Validation Metrics
- Loss: 1.1877621412277222
- Rouge1: 42.0713
- Rouge2: 23.3043
- RougeL: 37.3755
- RougeLsum: 37.8961
- Gen Len: 60.7117
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Ajaykannan6/autonlp-manthan-16122692
```
|
svanhvit/XLMR-ENIS-finetuned-ner-finetuned-conll_ner
|
svanhvit
| 2021-10-08T13:38:38Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner-finetuned-conll_ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8720365189221028
- name: Recall
type: recall
value: 0.8429893238434164
- name: F1
type: f1
value: 0.8572669368847712
- name: Accuracy
type: accuracy
value: 0.9857922913838598
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner-finetuned-conll_ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS-finetuned-ner](https://huggingface.co/vesteinn/XLMR-ENIS-finetuned-ner) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0770
- Precision: 0.8720
- Recall: 0.8430
- F1: 0.8573
- Accuracy: 0.9858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0461 | 1.0 | 2904 | 0.0647 | 0.8588 | 0.8107 | 0.8341 | 0.9842 |
| 0.0244 | 2.0 | 5808 | 0.0704 | 0.8691 | 0.8296 | 0.8489 | 0.9849 |
| 0.0132 | 3.0 | 8712 | 0.0770 | 0.8720 | 0.8430 | 0.8573 | 0.9858 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
nateraw/timm-resnet50-beans-copy
|
nateraw
| 2021-10-08T03:16:00Z
| 6
| 0
|
timm
|
[
"timm",
"pytorch",
"image-classification",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z
|
---
tags:
- timm
- image-classification
library_name: timm
---
|
raynardj/roberta-pubmed
|
raynardj
| 2021-10-08T02:58:27Z
| 8
| 2
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"pubmed",
"cancer",
"gene",
"clinical trial",
"bioinformatic",
"en",
"dataset:pubmed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language:
- en
tags:
- pubmed
- cancer
- gene
- clinical trial
- bioinformatic
license: apache-2.0
datasets:
- pubmed
widget:
- text: "The <mask> effects of hyperatomarin"
---
# Roberta-Base fine-tuned on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) Abstract
> We limit the training textual data to the following [MeSH](https://www.ncbi.nlm.nih.gov/mesh/)
* All the child MeSH of ```Biomarkers, Tumor(D014408)```, including things like ```Carcinoembryonic Antigen(D002272)```
* All the child MeSH of ```Carcinoma(D002277)```, including things like all kinds of carcinoma: like ```Carcinoma, Lewis Lung(D018827)``` etc. around 80 kinds of carcinoma
* All the child MeSH of ```Clinical Trial(D016439)```
* The training text file amounts to 531Mb
## Training
* Trained on language modeling task, with ```mlm_probability=0.15```, on 2 Tesla V100 32G
```python
training_args = TrainingArguments(
output_dir=config.save, #select model path for checkpoint
overwrite_output_dir=True,
num_train_epochs=3,
per_device_train_batch_size=30,
per_device_eval_batch_size=60,
evaluation_strategy= 'steps',
save_total_limit=2,
eval_steps=250,
metric_for_best_model='eval_loss',
greater_is_better=False,
load_best_model_at_end =True,
prediction_loss_only=True,
report_to = "none")
```
|
joonhan/roberta-roa
|
joonhan
| 2021-10-08T02:05:28Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
* Fine-tunning "KLUE/roberta-large" model For CER(Company Entity Recognition) With Custom Dataset
* Custom Datasets are composed of news data
```python
label_list = ['O',"B-PER","I-PER","B-ORG","I-ORG","B-COM","I-COM","B-LOC","I-LOC","B-DAT","I-DAT","B-TIM","I-TIM","B-QNT","I-QNT"]
refer_list = ['0','1','2','3','4','5','6','7','8','9','10','11','12','13','14']
```
- EX: "B-PER" : 1 , "B-COM" : 5
|
gchhablani/fnet-large-finetuned-stsb
|
gchhablani
| 2021-10-07T17:02:23Z
| 6
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: fnet-large-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8532669137129205
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-stsb
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6250
- Pearson: 0.8554
- Spearmanr: 0.8533
- Combined Score: 0.8543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.0727 | 1.0 | 1438 | 0.7718 | 0.8187 | 0.8240 | 0.8214 |
| 0.4619 | 2.0 | 2876 | 0.7704 | 0.8472 | 0.8500 | 0.8486 |
| 0.2401 | 3.0 | 4314 | 0.6250 | 0.8554 | 0.8533 | 0.8543 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
hiiamsid/est5-base-qg
|
hiiamsid
| 2021-10-07T09:26:49Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"spanish",
"question generation",
"qg",
"es",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
language: ["es"]
tags:
- spanish
- question generation
- qg
Datasets:
- SQUAD
license: mit
---
This is the finetuned model of hiiamsid/est5-base for Question Generation task.
* Here input is the context only and output is questions. No information regarding answers were given to model.
* Unfortunately, due to lack of sufficient resources it is fine tuned with batch_size=10 and num_seq_len=256. So, if too large context is given model may not get information about last portions.
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
MODEL_NAME = 'hiiamsid/est5-base-qg'
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model.cuda();
model.eval();
def generate_question(text, beams=10, grams=2, num_return_seq=10,max_size=256):
x = tokenizer(text, return_tensors='pt', padding=True).to(model.device)
out = model.generate(**x, no_repeat_ngram_size=grams, num_beams=beams, num_return_sequences=num_return_seq, max_length=max_size)
return tokenizer.decode(out[0], skip_special_tokens=True)
print(generate_question('Any context in spanish from which question is to be generated'))
```
## Citing & Authors
- Datasets : [squad_es](https://huggingface.co/datasets/squad_es)
- Model : [hiiamsid/est5-base](hiiamsid/est5-base)
|
huggingartists/bryan-adams
|
huggingartists
| 2021-10-07T08:16:16Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/bryan-adams",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/bryan-adams
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2cb27a7f3f50142f45cd18fae968738c.750x750x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bryan Adams</div>
<a href="https://genius.com/artists/bryan-adams">
<div style="text-align: center; font-size: 14px;">@bryan-adams</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Bryan Adams.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/bryan-adams).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/bryan-adams")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/22ksbpsz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Bryan Adams's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3b0c22fu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3b0c22fu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/bryan-adams')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/bryan-adams")
model = AutoModelWithLMHead.from_pretrained("huggingartists/bryan-adams")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
minwhoo/bart-base-negative-claim-generation
|
minwhoo
| 2021-10-07T04:24:44Z
| 16
| 5
|
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:wikifactcheck",
"arxiv:2109.15107",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
language:
- en
tags:
- text2text-generation
license: mit
datasets:
- wikifactcheck
widget:
- text: "Little Miss Sunshine was filmed over 30 days."
---
# BART base negative claim generation model
This is a BART-based model fine-tuned for negative claim generation. This model is used in the data augmentation process described in the paper [CrossAug: A Contrastive Data Augmentation Method for Debiasing Fact Verification Models](https://arxiv.org/abs/2109.15107). The model has been fine-tuned using the parallel and opposing claims from WikiFactCheck-English dataset.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'minwhoo/bart-base-negative-claim-generation'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.to('cuda' if torch.cuda.is_available() else 'cpu')
examples = [
"Little Miss Sunshine was filmed over 30 days.",
"Magic Johnson did not play for the Lakers.",
"Claire Danes is wedded to an actor from England."
]
batch = tokenizer(examples, max_length=1024, padding=True, truncation=True, return_tensors="pt")
out = model.generate(batch['input_ids'].to(model.device), num_beams=5)
negative_examples = tokenizer.batch_decode(out, skip_special_tokens=True)
print(negative_examples)
# ['Little Miss Sunshine was filmed less than 3 days.', 'Magic Johnson played for the Lakers.', 'Claire Danes is married to an actor from France.']
```
## Citation
```
@inproceedings{lee2021crossaug,
title={CrossAug: A Contrastive Data Augmentation Method for Debiasing Fact Verification Models},
author={Minwoo Lee and Seungpil Won and Juae Kim and Hwanhee Lee and Cheoneum Park and Kyomin Jung},
booktitle={Proceedings of the 30th ACM International Conference on Information & Knowledge Management},
publisher={Association for Computing Machinery},
series={CIKM '21},
year={2021}
}
```
|
arjun3816/autonlp-sam_summarization1-15492651
|
arjun3816
| 2021-10-07T02:28:05Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:arjun3816/autonlp-data-sam_summarization1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- arjun3816/autonlp-data-sam_summarization1
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15492651
## Validation Metrics
- Loss: 1.4060134887695312
- Rouge1: 50.9953
- Rouge2: 35.9204
- RougeL: 43.5673
- RougeLsum: 46.445
- Gen Len: 58.0193
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/arjun3816/autonlp-sam_summarization1-15492651
```
|
risingodegua/hate-speech-detector
|
risingodegua
| 2021-10-06T16:52:38Z
| 4
| 2
|
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language: en
tag: text-classification
datasets:
- twitter
- movies subtitles
---
# Hate Speech Detector
This model is a fork of the [bert-based-uncased-hatespeech-movies](https://huggingface.co/uhhlt/bert-based-uncased-hatespeech-movies) model. It is used to classify text as **normal**, **offensive**, **hatespeech**. The model is initially a pre-trained transformer model(bert-based-uncased) which is further trained on Twitter comments which can be normal, offensive and hate to learn the context from social media data. It is then fine-tuned using the movie subtitles dataset.
## Test it out
You can test this model live on [Spaces](https://huggingface.co/spaces/risingodegua/hate-speech-detector)
|
huggingtweets/beth_kindig-elonmusk-iofundofficial
|
huggingtweets
| 2021-10-06T03:14:09Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442634650703237120/mXIcYtIs_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1441096557944737802/y56EUiiU_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1431003324157812739/QYyroq6k_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Beth Kindig & I/O Fund Official</div>
<div style="text-align: center; font-size: 14px;">@beth_kindig-elonmusk-iofundofficial</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Beth Kindig & I/O Fund Official.
| Data | Elon Musk | Beth Kindig | I/O Fund Official |
| --- | --- | --- | --- |
| Tweets downloaded | 2400 | 3247 | 1935 |
| Retweets | 127 | 484 | 143 |
| Short tweets | 642 | 273 | 8 |
| Tweets kept | 1631 | 2490 | 1784 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pyiqrq2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beth_kindig-elonmusk-iofundofficial's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3anxlpvl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3anxlpvl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/beth_kindig-elonmusk-iofundofficial')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bergurth/XLMR-ENIS-finetuned-ner
|
bergurth
| 2021-10-05T21:52:34Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: Bónus feðgarnir Jóhannes Jónsson og Jón Ásgeir Jóhannesson opnuðu fyrstu Bónusbúðina í 400 fermetra húsnæði við Skútuvog laugardaginn 8. apríl 1989
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.861851332398317
- name: Recall
type: recall
value: 0.8384309266628767
- name: F1
type: f1
value: 0.849979828251974
- name: Accuracy
type: accuracy
value: 0.9830620929487668
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0938
- Precision: 0.8619
- Recall: 0.8384
- F1: 0.8500
- Accuracy: 0.9831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0574 | 1.0 | 2904 | 0.0983 | 0.8374 | 0.8061 | 0.8215 | 0.9795 |
| 0.0321 | 2.0 | 5808 | 0.0991 | 0.8525 | 0.8235 | 0.8378 | 0.9811 |
| 0.0179 | 3.0 | 8712 | 0.0938 | 0.8619 | 0.8384 | 0.8500 | 0.9831 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ueb1/IceBERT-finetuned-ner
|
ueb1
| 2021-10-05T21:28:47Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:gpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8926985693142575
- name: Recall
type: recall
value: 0.8648584060222249
- name: F1
type: f1
value: 0.8785579899253504
- name: Accuracy
type: accuracy
value: 0.985303647287535
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0799
- Precision: 0.8927
- Recall: 0.8649
- F1: 0.8786
- Accuracy: 0.9853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0528 | 1.0 | 2904 | 0.0774 | 0.8784 | 0.8529 | 0.8655 | 0.9829 |
| 0.0258 | 2.0 | 5808 | 0.0742 | 0.8769 | 0.8705 | 0.8737 | 0.9843 |
| 0.0166 | 3.0 | 8712 | 0.0799 | 0.8927 | 0.8649 | 0.8786 | 0.9853 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
prajjwal1/bert-tiny-mnli
|
prajjwal1
| 2021-10-05T18:00:12Z
| 104
| 2
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"arxiv:2110.01518",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
If you use the model, please consider citing the paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
```
MNLI: 60%
MNLI-mm: 61.61%
```
These models were trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-small-mnli
|
prajjwal1
| 2021-10-05T17:57:54Z
| 88
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"arxiv:2110.01518",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
If you use the model, please consider citing the paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
```
MNLI: 72.1%
MNLI-mm: 73.76%
```
These models were trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-medium-mnli
|
prajjwal1
| 2021-10-05T17:56:07Z
| 26,266
| 1
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1908.08962",
"arxiv:2110.01518",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). These models are trained on MNLI.
If you use the model, please consider citing the paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
```
MNLI: 75.86%
MNLI-mm: 77.03%
```
These models are trained for 4 epochs.
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/albert-base-v1-mnli
|
prajjwal1
| 2021-10-05T17:54:14Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"arxiv:2110.01518",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
If you use the model, please consider citing this paper
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
thorduragust/IceBERT-finetuned-ner
|
thorduragust
| 2021-10-05T16:36:22Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:gpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8948412698412699
- name: Recall
type: recall
value: 0.86222965706775
- name: F1
type: f1
value: 0.878232824195217
- name: Accuracy
type: accuracy
value: 0.9851596438314519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0787
- Precision: 0.8948
- Recall: 0.8622
- F1: 0.8782
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0526 | 1.0 | 2904 | 0.0746 | 0.8802 | 0.8539 | 0.8668 | 0.9836 |
| 0.0264 | 2.0 | 5808 | 0.0711 | 0.8777 | 0.8594 | 0.8684 | 0.9843 |
| 0.0161 | 3.0 | 8712 | 0.0787 | 0.8948 | 0.8622 | 0.8782 | 0.9852 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
thorduragust/XLMR-ENIS-finetuned-ner
|
thorduragust
| 2021-10-05T15:40:05Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8707943925233644
- name: Recall
type: recall
value: 0.8475270039795338
- name: F1
type: f1
value: 0.8590031691155287
- name: Accuracy
type: accuracy
value: 0.982856184128243
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0916
- Precision: 0.8708
- Recall: 0.8475
- F1: 0.8590
- Accuracy: 0.9829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0581 | 1.0 | 2904 | 0.1055 | 0.8477 | 0.8057 | 0.8262 | 0.9791 |
| 0.0316 | 2.0 | 5808 | 0.0902 | 0.8574 | 0.8349 | 0.8460 | 0.9813 |
| 0.0201 | 3.0 | 8712 | 0.0916 | 0.8708 | 0.8475 | 0.8590 | 0.9829 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
eliasbe/XLMR-ENIS-finetuned-ner
|
eliasbe
| 2021-10-05T14:03:47Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.9002453676283949
- name: Recall
type: recall
value: 0.896
- name: F1
type: f1
value: 0.8981176669198953
- name: Accuracy
type: accuracy
value: 0.9843747637694087
widget:
- text: systurnar guðrún og monique voru einar í skóginum umkringdar víði, eik og reyni með þá ósk að sameinast fjölskyldu sinni sem fór á mai thai og í bíó paradís að sjá jim carey leika í the eternal sunshine of the spotless mind.
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0827
- Precision: 0.9002
- Recall: 0.896
- F1: 0.8981
- Accuracy: 0.9844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0567 | 1.0 | 2904 | 0.1081 | 0.8486 | 0.8140 | 0.8309 | 0.9796 |
| 0.0302 | 2.0 | 5808 | 0.0906 | 0.8620 | 0.8298 | 0.8456 | 0.9818 |
| 0.0197 | 3.0 | 8712 | 0.0948 | 0.8691 | 0.8447 | 0.8567 | 0.9826 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
eliasbe/IceBERT-finetuned-ner
|
eliasbe
| 2021-10-05T12:35:51Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
model-index:
- name: IceBERT-finetuned-ner
widget:
- text: systurnar guðrún og monique voru einar í skóginum umkringdar víði, eik og reyni með þá ósk að sameinast fjölskyldu sinni sem fór á mai thai og í bíó paradís að sjá jim carey leika í the eternal sunshine of the spotless mind.
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [eliasbe/IceBERT-finetuned-ner](https://huggingface.co/eliasbe/IceBERT-finetuned-ner) on the mim_gold_ner dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
LenaT/distilgpt2-finetuned-wikitext2
|
LenaT
| 2021-10-05T12:32:43Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
huggingtweets/wearosbygoogle
|
huggingtweets
| 2021-10-05T11:37:27Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/wearosbygoogle/1633433843674/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/974323315018948609/vqb04zdQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Wear OS by Google</div>
<div style="text-align: center; font-size: 14px;">@wearosbygoogle</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Wear OS by Google.
| Data | Wear OS by Google |
| --- | --- |
| Tweets downloaded | 3201 |
| Retweets | 18 |
| Short tweets | 16 |
| Tweets kept | 3167 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/116bbt5f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wearosbygoogle's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2namz6ed) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2namz6ed/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wearosbygoogle')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hiiamsid/est5-base
|
hiiamsid
| 2021-10-05T07:35:26Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"spanish",
"es",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
language: ["es"]
tags:
- spanish
license: mit
---
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only Spanish embeddings left.
* The original model has 582M parameters, with 237M of them being input and output embeddings.
* After shrinking the `sentencepiece` vocabulary from 250K to 25K (top 25K Spanish tokens) the number of model parameters reduced to 237M parameters, and model size reduced from 2.2GB to 0.9GB - 42% of the original one.
## Citing & Authors
- Datasets : [cleaned corpora](https://github.com/crscardellino/sbwce)
- Model : [google/mt5-base](https://huggingface.co/google/mt5-base)
- Reference: [cointegrated/rut5-base](https://huggingface.co/cointegrated/rut5-base)
|
huggingtweets/dervine7
|
huggingtweets
| 2021-10-05T05:53:32Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/dervine7/1633413178103/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374540783202734082/5l7zt3RK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dev, Bride of Kripkenstein</div>
<div style="text-align: center; font-size: 14px;">@dervine7</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dev, Bride of Kripkenstein.
| Data | Dev, Bride of Kripkenstein |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 177 |
| Short tweets | 272 |
| Tweets kept | 2788 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2j2ia8ja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dervine7's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/287itbe2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/287itbe2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dervine7')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mrp/simcse-model-roberta-base-thai
|
mrp
| 2021-10-05T05:51:08Z
| 7
| 2
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {mrp/simcse-model-roberta-base-thai}
This is a [sentence-transformers](https://www.SBERT.net) by using XLM-R as the baseline model model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["ฉันนะคือคนรักชาติยังไงละ!", "พวกสามกีบล้มเจ้า!"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
mrp/simcse-model-distil-m-bert
|
mrp
| 2021-10-05T05:49:08Z
| 128
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {mrp/simcse-model-distil-m-bert}
This is a [sentence-transformers](https://www.SBERT.net) by using m-Distil-BERT as the baseline model model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["ฉันนะคือคนรักชาติยังไงละ!", "พวกสามกีบล้มเจ้า!"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
mrp/simcse-model-m-bert-thai-cased
|
mrp
| 2021-10-05T05:48:44Z
| 2,617
| 7
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {mrp/simcse-model-m-bert-thai-cased}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) by using mBERT as the baseline model and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["ฉันนะคือคนรักชาติยังไงละ!", "พวกสามกีบล้มเจ้า!"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
smallbenchnlp/roberta-small
|
smallbenchnlp
| 2021-10-05T04:03:28Z
| 59
| 1
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
Small-Bench NLP is a benchmark for small efficient neural language models trained on a single GPU.
|
Titantoe/XLMR-ENIS-finetuned-ner
|
Titantoe
| 2021-10-05T00:54:03Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8713799976550592
- name: Recall
type: recall
value: 0.8450255827174531
- name: F1
type: f1
value: 0.8580004617871162
- name: Accuracy
type: accuracy
value: 0.9827265378338392
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0941
- Precision: 0.8714
- Recall: 0.8450
- F1: 0.8580
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0572 | 1.0 | 2904 | 0.0998 | 0.8586 | 0.8171 | 0.8373 | 0.9802 |
| 0.0313 | 2.0 | 5808 | 0.0868 | 0.8666 | 0.8288 | 0.8473 | 0.9822 |
| 0.0199 | 3.0 | 8712 | 0.0941 | 0.8714 | 0.8450 | 0.8580 | 0.9827 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Titantoe/IceBERT-finetuned-ner
|
Titantoe
| 2021-10-04T22:31:18Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:gpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8920083733530353
- name: Recall
type: recall
value: 0.8655753375552635
- name: F1
type: f1
value: 0.8785930867192238
- name: Accuracy
type: accuracy
value: 0.9855436530476731
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0772
- Precision: 0.8920
- Recall: 0.8656
- F1: 0.8786
- Accuracy: 0.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0519 | 1.0 | 2904 | 0.0731 | 0.8700 | 0.8564 | 0.8631 | 0.9832 |
| 0.026 | 2.0 | 5808 | 0.0749 | 0.8771 | 0.8540 | 0.8654 | 0.9840 |
| 0.0159 | 3.0 | 8712 | 0.0772 | 0.8920 | 0.8656 | 0.8786 | 0.9855 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ueb1/distilbert-base-uncased-finetuned-ner
|
ueb1
| 2021-10-04T18:16:48Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9290229566374626
- name: Recall
type: recall
value: 0.9371294328224634
- name: F1
type: f1
value: 0.9330585876587213
- name: Accuracy
type: accuracy
value: 0.9839547555880344
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Precision: 0.9290
- Recall: 0.9371
- F1: 0.9331
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2276 | 1.0 | 878 | 0.0685 | 0.9204 | 0.9246 | 0.9225 | 0.9814 |
| 0.0498 | 2.0 | 1756 | 0.0622 | 0.9238 | 0.9358 | 0.9298 | 0.9833 |
| 0.0298 | 3.0 | 2634 | 0.0608 | 0.9290 | 0.9371 | 0.9331 | 0.9840 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
|
andi611
| 2021-10-04T14:52:03Z
| 74
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"dataset:conll2003",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z
|
---
language:
- en
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
- conll2003
model_index:
- name: bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: squad_v2
type: squad_v2
args: conll2003
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
Elron/bleurt-tiny-128
|
Elron
| 2021-10-04T13:27:02Z
| 5
| 2
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z
|
\n## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion was originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([-1.0563, -0.3004])
```
|
Elron/bleurt-base-512
|
Elron
| 2021-10-04T13:23:33Z
| 317
| 1
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z
|
\n## BLEURT
Pytorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for model conversion was originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([1.0327, 0.2055])
```
|
KBLab/swedish-spacy-pipeline
|
KBLab
| 2021-10-04T13:18:01Z
| 1
| 2
|
spacy
|
[
"spacy",
"token-classification",
"sv",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z
|
---
tags:
- spacy
- token-classification
language:
- sv
license: mit
model-index:
- name: sv_pipeline
results:
- task:
name: POS
type: token-classification
metrics:
- name: POS Accuracy
type: accuracy
value: 0.9818079056
- task:
name: SENTER
type: token-classification
metrics:
- name: SENTER Precision
type: precision
value: 0.9212548015
- name: SENTER Recall
type: recall
value: 0.9368489583
- name: SENTER F Score
type: f_score
value: 0.9289864429
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Dependencies Accuracy
type: accuracy
value: 0.9198832946
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Dependencies Accuracy
type: accuracy
value: 0.9198832946
---
|
MultiBertGunjanPatrick/multiberts-seed-9
|
MultiBertGunjanPatrick
| 2021-10-04T05:47:01Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertModel.from_pretrained("multiberts-seed-0")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-7
|
MultiBertGunjanPatrick
| 2021-10-04T05:41:49Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertModel.from_pretrained("multiberts-seed-0")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4
|
MultiBertGunjanPatrick
| 2021-10-04T05:35:14Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertModel.from_pretrained("multiberts-seed-0")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-2000k
|
MultiBertGunjanPatrick
| 2021-10-04T05:12:58Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 2000k (uncased)
Seed 4 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-2000k')
model = BertModel.from_pretrained("multiberts-seed-4-2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-1900k
|
MultiBertGunjanPatrick
| 2021-10-04T05:12:51Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 1900k (uncased)
Seed 4 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-1900k')
model = BertModel.from_pretrained("multiberts-seed-4-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-1600k
|
MultiBertGunjanPatrick
| 2021-10-04T05:12:31Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 1600k (uncased)
Seed 4 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-1600k')
model = BertModel.from_pretrained("multiberts-seed-4-1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-1400k
|
MultiBertGunjanPatrick
| 2021-10-04T05:12:17Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 1400k (uncased)
Seed 4 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-1400k')
model = BertModel.from_pretrained("multiberts-seed-4-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-1000k
|
MultiBertGunjanPatrick
| 2021-10-04T05:11:48Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 1000k (uncased)
Seed 4 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-1000k')
model = BertModel.from_pretrained("multiberts-seed-4-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-700k
|
MultiBertGunjanPatrick
| 2021-10-04T05:11:26Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 700k (uncased)
Seed 4 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-700k')
model = BertModel.from_pretrained("multiberts-seed-4-700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-300k
|
MultiBertGunjanPatrick
| 2021-10-04T05:10:55Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 300k (uncased)
Seed 4 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-300k')
model = BertModel.from_pretrained("multiberts-seed-4-300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-200k
|
MultiBertGunjanPatrick
| 2021-10-04T05:10:41Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 200k (uncased)
Seed 4 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-200k')
model = BertModel.from_pretrained("multiberts-seed-4-200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-180k
|
MultiBertGunjanPatrick
| 2021-10-04T05:10:34Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 180k (uncased)
Seed 4 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-180k')
model = BertModel.from_pretrained("multiberts-seed-4-180k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-160k
|
MultiBertGunjanPatrick
| 2021-10-04T05:10:26Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 160k (uncased)
Seed 4 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-160k')
model = BertModel.from_pretrained("multiberts-seed-4-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-140k
|
MultiBertGunjanPatrick
| 2021-10-04T05:10:19Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 140k (uncased)
Seed 4 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-140k')
model = BertModel.from_pretrained("multiberts-seed-4-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-80k
|
MultiBertGunjanPatrick
| 2021-10-04T05:09:58Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 80k (uncased)
Seed 4 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-80k')
model = BertModel.from_pretrained("multiberts-seed-4-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-60k
|
MultiBertGunjanPatrick
| 2021-10-04T05:09:51Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 60k (uncased)
Seed 4 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-60k')
model = BertModel.from_pretrained("multiberts-seed-4-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-20k
|
MultiBertGunjanPatrick
| 2021-10-04T05:09:37Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 20k (uncased)
Seed 4 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-20k')
model = BertModel.from_pretrained("multiberts-seed-4-20k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-0k
|
MultiBertGunjanPatrick
| 2021-10-04T05:09:30Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 0k (uncased)
Seed 4 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-0k')
model = BertModel.from_pretrained("multiberts-seed-4-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-2000k
|
MultiBertGunjanPatrick
| 2021-10-04T05:09:23Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 2000k (uncased)
Seed 3 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-2000k')
model = BertModel.from_pretrained("multiberts-seed-3-2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-1800k
|
MultiBertGunjanPatrick
| 2021-10-04T05:09:08Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 1800k (uncased)
Seed 3 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1800k')
model = BertModel.from_pretrained("multiberts-seed-3-1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-1100k
|
MultiBertGunjanPatrick
| 2021-10-04T05:08:14Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 1100k (uncased)
Seed 3 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1100k')
model = BertModel.from_pretrained("multiberts-seed-3-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-1000k
|
MultiBertGunjanPatrick
| 2021-10-04T05:08:07Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 1000k (uncased)
Seed 3 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1000k')
model = BertModel.from_pretrained("multiberts-seed-3-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-900k
|
MultiBertGunjanPatrick
| 2021-10-04T05:08:00Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 900k (uncased)
Seed 3 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-900k')
model = BertModel.from_pretrained("multiberts-seed-3-900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-800k
|
MultiBertGunjanPatrick
| 2021-10-04T05:07:53Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 800k (uncased)
Seed 3 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-800k')
model = BertModel.from_pretrained("multiberts-seed-3-800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-600k
|
MultiBertGunjanPatrick
| 2021-10-04T05:07:39Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 600k (uncased)
Seed 3 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-600k')
model = BertModel.from_pretrained("multiberts-seed-3-600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-500k
|
MultiBertGunjanPatrick
| 2021-10-04T05:07:32Z
| 9
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 500k (uncased)
Seed 3 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-500k')
model = BertModel.from_pretrained("multiberts-seed-3-500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-400k
|
MultiBertGunjanPatrick
| 2021-10-04T05:07:25Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 400k (uncased)
Seed 3 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-400k')
model = BertModel.from_pretrained("multiberts-seed-3-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-140k
|
MultiBertGunjanPatrick
| 2021-10-04T05:06:44Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 140k (uncased)
Seed 3 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-140k')
model = BertModel.from_pretrained("multiberts-seed-3-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-120k
|
MultiBertGunjanPatrick
| 2021-10-04T05:06:36Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 120k (uncased)
Seed 3 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-120k')
model = BertModel.from_pretrained("multiberts-seed-3-120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-100k
|
MultiBertGunjanPatrick
| 2021-10-04T05:06:29Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 100k (uncased)
Seed 3 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-100k')
model = BertModel.from_pretrained("multiberts-seed-3-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-60k
|
MultiBertGunjanPatrick
| 2021-10-04T05:06:15Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 60k (uncased)
Seed 3 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-60k')
model = BertModel.from_pretrained("multiberts-seed-3-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-20k
|
MultiBertGunjanPatrick
| 2021-10-04T05:06:01Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 20k (uncased)
Seed 3 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-20k')
model = BertModel.from_pretrained("multiberts-seed-3-20k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-3-0k
|
MultiBertGunjanPatrick
| 2021-10-04T05:05:53Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-3
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 3 Checkpoint 0k (uncased)
Seed 3 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-0k')
model = BertModel.from_pretrained("multiberts-seed-3-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-2-1800k
|
MultiBertGunjanPatrick
| 2021-10-04T05:05:29Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-2
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 2 Checkpoint 1800k (uncased)
Seed 2 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1800k')
model = BertModel.from_pretrained("multiberts-seed-2-1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-2-1600k
|
MultiBertGunjanPatrick
| 2021-10-04T05:05:14Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-2
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 2 Checkpoint 1600k (uncased)
Seed 2 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1600k')
model = BertModel.from_pretrained("multiberts-seed-2-1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.