modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
hiiamsid/autonlp-Summarization-20684328 | hiiamsid | 2021-10-19T05:09:38Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autonlp",
"es",
"dataset:hiiamsid/autonlp-data-Summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: es
widget:
- text: "I love AutoNLP 🤗"
datasets:
- hiiamsid/autonlp-data-Summarization
co2_eq_emissions: 1133.9679082840014
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 20684328
- CO2 Emissions (in grams): 1133.9679082840014
## Validation Metrics
- Loss: nan
- Rouge1: 9.4193
- Rouge2: 0.91
- RougeL: 7.9376
- RougeLsum: 8.0076
- Gen Len: 10.65
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/hiiamsid/autonlp-Summarization-20684328
```
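Or, as a minimal Python sketch (not part of the original card): the repo is tagged as an mT5 `text2text-generation` checkpoint, so `AutoModelForSeq2SeqLM` is assumed here.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned summarization checkpoint (use_auth_token is needed for gated/private repos).
model = AutoModelForSeq2SeqLM.from_pretrained("hiiamsid/autonlp-Summarization-20684328", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hiiamsid/autonlp-Summarization-20684328", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
summary_ids = model.generate(**inputs)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
|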
bdwjaya/t5-small-finetuned-xsum | bdwjaya | 2021-10-19T03:34:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
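The list above corresponds to a standard `transformers` `Trainer` setup. As a hedged illustration (not part of the original card; `output_dir` is a placeholder), these values map onto `TrainingArguments` roughly as follows:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5-small-finetuned-xsum",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed-precision training
)
```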
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
tiennvcs/distilbert-base-uncased-finetuned-squad | tiennvcs | 2021-10-19T02:41:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/sparql-qald9-t5-small-2021-10-19_00-01 | yazdipour | 2021-10-19T00:13:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-small-2021-10-19_00-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_00-01
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-small-2021-10-18_23-00](https://huggingface.co/yazdipour/text-to-sparql-t5-small-2021-10-18_23-00) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-------------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.4058 | 19.0 | 0.3946 | 0.0660 | 0.2253 | 9.8438 | [72.36042012161415, 47.920433996383366, 33.929754804506295, 26.416482707873435] | 0.2344 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-small-2021-10-18_23-00 | yazdipour | 2021-10-19T00:01:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-small-2021-10-18_23-00
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_23-00
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2284
- Gen Len: 19.0
- Bertscorer-p: 0.5644
- Bertscorer-r: 0.0815
- Bertscorer-f1: 0.3120
- Sacrebleu-score: 5.5690
- Sacrebleu-precisions: [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607]
- Bleu-bp: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:---------------------------------------------------------------------------:|:-------:|
| 0.2808 | 1.0 | 4772 | 0.2284 | 19.0 | 0.5644 | 0.0815 | 0.3120 | 5.5690 | [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607] | 0.0728 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
castorini/tct_colbert-v2-msmarco-cqe | castorini | 2021-10-18T23:34:32Z | 9 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | This model reproduces Contextualized Query Embeddings for Conversational Search, as described in the following paper:
> Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [Contextualized Query Embeddings for Conversational Search.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_EMNLP2021.pdf) EMNLP, Nov 2021.
This model is fine-tuned only on the query encoder, with the passage encoder frozen. The starting point is [tct_colbert-msmarco](https://huggingface.co/castorini/tct_colbert-msmarco/tree/main). Detailed usage of the model will be available soon in [Chatty Goose](https://github.com/castorini/chatty-goose). You can also check the fine-tuning and inference code (using TensorFlow) in our [CQE repo](https://github.com/castorini/CQE).
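Until the Chatty Goose integration is released, a minimal feature-extraction sketch (not from the original card; the query text is a placeholder) looks like this:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("castorini/tct_colbert-v2-msmarco-cqe")
model = AutoModel.from_pretrained("castorini/tct_colbert-v2-msmarco-cqe")

# NOTE: TCT-ColBERT-style encoders apply additional query preprocessing (special
# markers / padding); see the CQE and Chatty Goose repos for the exact procedure.
# This sketch only shows raw token embeddings with mean pooling as a placeholder.
query = "what is the capital of france"
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
query_embedding = outputs.last_hidden_state.mean(dim=1)
print(query_embedding.shape)
```
|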
gagan3012/pickuplines | gagan3012 | 2021-10-18T19:53:36Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: pickuplines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pickuplines
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
gagan3012/model | gagan3012 | 2021-10-18T18:23:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
mmcquade11/autonlp-imdb-test-21134453 | mmcquade11 | 2021-10-18T17:47:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:mmcquade11/autonlp-data-imdb-test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-imdb-test
co2_eq_emissions: 38.102565360610484
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 21134453
- CO2 Emissions (in grams): 38.102565360610484
## Validation Metrics
- Loss: 0.172550767660141
- Accuracy: 0.9355
- Precision: 0.9362853135644159
- Recall: 0.9346
- AUC: 0.98267064
- F1: 0.9354418977079372
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-imdb-test-21134453
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
huggingtweets/muratpak | huggingtweets | 2021-10-18T17:22:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/muratpak/1634577747584/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442159742558765064/RFB5JjIk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pak</div>
<div style="text-align: center; font-size: 14px;">@muratpak</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pak.
| Data | Pak |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 686 |
| Short tweets | 964 |
| Tweets kept | 1600 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1s58abff/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @muratpak's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30zzcgkm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30zzcgkm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/muratpak')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cambridgeltl/trans-encoder-bi-simcse-roberta-base | cambridgeltl | 2021-10-18T13:29:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2109.13059",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
---
### cambridgeltl/trans-encoder-bi-simcse-roberta-base
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-roberta-base](https://huggingface.co/princeton-nlp/unsup-simcse-roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
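A minimal usage sketch (not part of the original card; the example sentences are placeholders) that takes the `[CLS]` token before the pooler as the sentence representation:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/trans-encoder-bi-simcse-roberta-base")
model = AutoModel.from_pretrained("cambridgeltl/trans-encoder-bi-simcse-roberta-base")

sentences = ["A cat sits on the mat.", "A feline is resting on a rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] before the pooler = first token of the last hidden layer.
embeddings = outputs.last_hidden_state[:, 0, :]
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```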
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
lewtun/results | lewtun | 2021-10-18T13:16:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251012149383893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8221 | 1.0 | 250 | 0.3106 | 0.9125 | 0.9102 |
| 0.2537 | 2.0 | 500 | 0.2147 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
AyushPJ/test-squad-trained-finetuned-squad | AyushPJ | 2021-10-18T11:01:55Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: test-squad-trained-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-squad-trained-finetuned-squad
This model was trained from scratch on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cu110
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Ching/negation_detector | Ching | 2021-10-18T10:32:43Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | This question answering model was fine-tuned to detect negation expressions.
How to use:
question: negation
context: That is not safe!
Answer: not
question: negation
context: Weren't we going to go to the moon?
Answer: Weren't
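As a sketch of the pattern above (assuming the checkpoint works with the standard `question-answering` pipeline):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Ching/negation_detector")

# The question is always "negation"; the predicted answer span is the negation expression.
print(qa(question="negation", context="That is not safe!"))
print(qa(question="negation", context="Weren't we going to go to the moon?"))
```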
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | CAMeL-Lab | 2021-10-18T10:18:01Z | 134 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'عامل ايه ؟'
---
# CAMeLBERT-CA POS-EGY Model
## Model description
**CAMeLBERT-CA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9990943, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.99863535, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99990875, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy | CAMeL-Lab | 2021-10-18T10:17:22Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'عامل ايه ؟'
---
# CAMeLBERT-MSA POS-EGY Model
## Model description
**CAMeLBERT-MSA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.99979395, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.998192, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99929804, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | CAMeL-Lab | 2021-10-18T10:15:57Z | 12 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'عامل ايه ؟'
---
# CAMeLBERT-Mix POS-EGY Model
## Model description
**CAMeLBERT-Mix POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9972628, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9525163, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99869114, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | CAMeL-Lab | 2021-10-18T10:15:37Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'عامل ايه ؟'
---
# CAMeLBERT-DA POS-EGY Model
## Model description
**CAMeLBERT-DA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.99843216, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9990083, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.82973784, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | CAMeL-Lab | 2021-10-18T10:13:34Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'شلونك ؟ شخبارك ؟'
---
# CAMeLBERT-CA POS-GLF Model
## Model description
**CAMeLBERT-CA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'noun', 'score': 0.99572617, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'noun', 'score': 0.9411187, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999661, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.99286526, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9983397, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9609381, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999668, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | CAMeL-Lab | 2021-10-18T09:44:42Z | 1,178 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
---
# CAMeLBERT-Mix POS-MSA Model
## Model description
**CAMeLBERT-Mix POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999592, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997877, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998405, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9697179, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.99967164, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99980617, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997973, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99995637, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.9983974, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999469, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9993273, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
yazdipour/text-to-sparql-t5-base-2021-10-17_23-40 | yazdipour | 2021-10-18T02:23:08Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-base-2021-10-17_23-40
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.2649857699871063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-17_23-40
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2645
- Gen Len: 19.0
- P: 0.5125
- R: 0.0382
- F1: 0.2650
- Score: 5.1404
- Bleu-precisions: [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422]
- Bleu-bp: 0.0707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3513 | 1.0 | 4807 | 0.2645 | 19.0 | 0.5125 | 0.0382 | 0.2650 | 5.1404 | [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422] | 0.0707 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
tal-yifat/injury-report-distilgpt2-test | tal-yifat | 2021-10-18T02:15:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: injury-report-distilgpt2-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# injury-report-distilgpt2-test
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 380 | 3.6525 |
| 3.9116 | 2.0 | 760 | 3.5507 |
| 3.6015 | 3.0 | 1140 | 3.5243 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
airKlizz/bert2bert-multi-fr-wiki-news | airKlizz | 2021-10-17T20:10:30Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: fr
license: mit
---
|
airKlizz/t5-base-multi-fr-wiki-news | airKlizz | 2021-10-17T20:09:42Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"fr",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: fr
license: mit
---
|
yazdipour/text-to-sparql-t5-small-2021-10-17_18-47 | yazdipour | 2021-10-17T19:48:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-17_18-47
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.2345714420080185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-17_18-47
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5258
- Gen Len: 19.0
- P: 0.4582
- R: 0.0278
- F1: 0.2346
- Score: 3.5848
- Bleu-precisions: [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059]
- Bleu-bp: 0.0631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.7575 | 1.0 | 4807 | 0.5258 | 19.0 | 0.4582 | 0.0278 | 0.2346 | 3.5848 | [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059] | 0.0631 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | CAMeL-Lab | 2021-10-17T13:35:38Z | 1,090 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID MADAR Twitter-5 Model
## Model description
**CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [MADAR Twitter-5](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.5741344094276428},
{'label': 'Kuwait', 'score': 0.5225679278373718}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
biu-nlp/cdlm | biu-nlp | 2021-10-17T12:24:59Z | 45 | 1 | transformers | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"cdlm",
"en",
"arxiv:2101.00406",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: en
tags:
- longformer
- cdlm
license: apache-2.0
inference: false
---
# Cross-Document Language Modeling
CDLM: Cross-Document Language Modeling.
Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew E Peters, Arie Cattan and Ido Dagan. In EMNLP Findings, 2021. [PDF](https://arxiv.org/pdf/2101.00406.pdf)
Please note that during our pretraining we used the document and sentence separators, which you might want to add to your data. The document and sentence separators are `<doc-s>`, `</doc-s>` (the last two tokens in the vocabulary), and `<s>`, `</s>`, respectively.
```python
from transformers import AutoTokenizer, AutoModel
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('biu-nlp/cdlm')
model = AutoModel.from_pretrained('biu-nlp/cdlm')
```
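As a hedged illustration of the separator usage mentioned above (not from the original card; the documents and sentences are placeholders):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('biu-nlp/cdlm')
model = AutoModel.from_pretrained('biu-nlp/cdlm')

# Wrap each document in <doc-s> ... </doc-s> and each sentence in <s> ... </s>,
# mirroring the separators used during pretraining.
doc1 = "<doc-s> <s> The company announced a merger. </s> <s> Shares rose sharply. </s> </doc-s>"
doc2 = "<doc-s> <s> A merger deal was confirmed by the firm. </s> </doc-s>"

inputs = tokenizer(doc1 + " " + doc2, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```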
The original repo is [here](https://github.com/aviclu/CDLM).
If you find our work useful, please cite the paper as:
```bibtex
@article{caciularu2021cross,
title={Cross-Document Language Modeling},
author={Caciularu, Avi and Cohan, Arman and Beltagy, Iz and Peters, Matthew E and Cattan, Arie and Dagan, Ido},
journal={Findings of the Association for Computational Linguistics: EMNLP 2021},
year={2021}
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | CAMeL-Lab | 2021-10-17T12:09:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-DA Poetry Classification Model
## Model description
**CAMeLBERT-DA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9874765276908875},
{'label': 'السلسلة', 'score': 0.6877778172492981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | CAMeL-Lab | 2021-10-17T12:09:38Z | 13 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-CA Poetry Classification Model
## Model description
**CAMeLBERT-CA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9845284819602966},
{'label': 'الكامل', 'score': 0.752918004989624}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | CAMeL-Lab | 2021-10-17T12:08:30Z | 475 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT MSA SA Model
## Model description
**CAMeLBERT MSA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
castorini/monot5-large-msmarco | castorini | 2021-10-17T11:20:56Z | 576 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs).
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
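Since the card itself does not include a standalone snippet, the following is a rough sketch of the monoT5 scoring scheme described in the paper: the model reads a prompt of the form `Query: ... Document: ... Relevant:` and the relevance score comes from comparing the probabilities of generating "true" versus "false". The query and passage below are made up, and for real use the pygaggle links above are the better starting point.
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-large-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-large-msmarco")
model.eval()

query = "what is the capital of france"                              # hypothetical query
passage = "Paris is the capital and most populous city of France."   # hypothetical passage

# monoT5 prompt format from the paper: "Query: ... Document: ... Relevant:"
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")
decoder_start = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_start).logits[0, 0]

true_id = tokenizer.encode("true")[0]    # id of the "true" piece
false_id = tokenizer.encode("false")[0]  # id of the "false" piece
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(score)  # closer to 1.0 means more relevant
```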
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | CAMeL-Lab | 2021-10-17T11:17:23Z | 29 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID Madar Corpus26 Model
## Model description
**CAMeLBERT-Mix DID Madar Corpus26 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 26](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.8751305937767029},
{'label': 'DOH', 'score': 0.9867215156555176}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | CAMeL-Lab | 2021-10-17T11:15:54Z | 7,487 | 43 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT-DA SA Model
## Model description
**CAMeLBERT-DA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | CAMeL-Lab | 2021-10-17T11:15:12Z | 35 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT-CA SA Model
## Model description
**CAMeLBERT-CA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-ner | CAMeL-Lab | 2021-10-17T11:13:00Z | 107,110 | 12 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
---
# CAMeLBERT-Mix NER Model
## Model description
**CAMeLBERT-Mix NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-mix-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
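The pipeline output above is token-level; if you prefer whole entity spans, the pipeline can also merge consecutive B-/I- tokens. A small sketch, assuming a `transformers` version that supports the `grouped_entities` argument (the grouped output shown is illustrative, not an actual run):
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-ner', grouped_entities=True)
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'entity_group': 'LOC', 'word': 'أبوظبي', ...},
 {'entity_group': 'LOC', 'word': 'الإمارات العربية المتحدة', ...}]
```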
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | CAMeL-Lab | 2021-10-17T11:05:21Z | 33 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID NADI Model
## Model description
**CAMeLBERT-MSA DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.9242768287658691},
{'label': 'Saudi_Arabia', 'score': 0.3400847613811493}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
fdominik98/bert-base-hu-cased-ner | fdominik98 | 2021-10-17T10:48:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | This model is the fine-tuned model of "akdeniz27/bert-base-hungarian-cased-ner" using WikiANN-hu dataset. |
lucius/distilroberta-base-finetuned-wikitext2 | lucius | 2021-10-17T10:40:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0827 | 1.0 | 2406 | 1.9227 |
| 1.9993 | 2.0 | 4812 | 1.8828 |
| 1.9614 | 3.0 | 7218 | 1.8172 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
MaryaAI/opus-mt-en-ar-finetuned-Math-13-10-en-to-ar | MaryaAI | 2021-10-17T08:27:27Z | 257 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:syssr_en_ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- syssr_en_ar
model-index:
- name: opus-mt-en-ar-finetuned-Math-13-10-en-to-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-Math-13-10-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.0
- Tokenizers 0.10.3
|
gwima/ryan-sackmott | gwima | 2021-10-17T03:15:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
|
amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061 | amansolanki | 2021-10-17T00:32:35Z | 1,906 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:amansolanki/autonlp-data-Tweet-Sentiment-Extraction",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- amansolanki/autonlp-data-Tweet-Sentiment-Extraction
co2_eq_emissions: 3.651199395353127
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 20114061
- CO2 Emissions (in grams): 3.651199395353127
## Validation Metrics
- Loss: 0.5046541690826416
- Accuracy: 0.8036219581211093
- Macro F1: 0.807095210403678
- Micro F1: 0.8036219581211093
- Weighted F1: 0.8039634739225368
- Macro Precision: 0.8076842795233988
- Micro Precision: 0.8036219581211093
- Weighted Precision: 0.8052135235094771
- Macro Recall: 0.8075241470527056
- Micro Recall: 0.8036219581211093
- Weighted Recall: 0.8036219581211093
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
gagandeepkundi/latam-question-quality | gagandeepkundi | 2021-10-16T16:32:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"es",
"dataset:gagandeepkundi/autonlp-data-text-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: es
widget:
- text: "I love AutoNLP 🤗"
datasets:
- gagandeepkundi/autonlp-data-text-classification
co2_eq_emissions: 20.790169878009916
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 19984005
- CO2 Emissions (in grams): 20.790169878009916
## Validation Metrics
- Loss: 0.06693269312381744
- Accuracy: 0.9789
- Precision: 0.9843244336569579
- Recall: 0.9733
- AUC: 0.99695552
- F1: 0.9787811745776348
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/gagandeepkundi/autonlp-text-classification-19984005
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gagandeepkundi/autonlp-text-classification-19984005", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gagandeepkundi/autonlp-text-classification-19984005", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
huggingtweets/the_nftking | huggingtweets | 2021-10-16T14:11:01Z | 4 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/the_nftking/1634393457706/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1434700639649599488/J63TSf--_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NFT KING 👑</div>
<div style="text-align: center; font-size: 14px;">@the_nftking</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NFT KING 👑.
| Data | NFT KING 👑 |
| --- | --- |
| Tweets downloaded | 163 |
| Retweets | 23 |
| Short tweets | 36 |
| Tweets kept | 104 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26d96n9m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @the_nftking's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/f7wd0e6f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/f7wd0e6f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/the_nftking')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lewtun/xlm-roberta-base-finetuned-marc-de | lewtun | 2021-10-16T11:38:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9934
- Mae: 0.4867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1514 | 1.0 | 308 | 1.0455 | 0.5221 |
| 0.9997 | 2.0 | 616 | 0.9934 | 0.4867 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
huggingartists/slava-marlow | huggingartists | 2021-10-16T10:37:58Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/slava-marlow",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/slava-marlow
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e308b1bc9eeb159ecfa9d807d715f095.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">SLAVA MARLOW</div>
<a href="https://genius.com/artists/slava-marlow">
<div style="text-align: center; font-size: 14px;">@slava-marlow</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from SLAVA MARLOW.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/slava-marlow).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/slava-marlow")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1fdcz1s5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on SLAVA MARLOW's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/ro4q353s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/ro4q353s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/slava-marlow')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/slava-marlow")
model = AutoModelWithLMHead.from_pretrained("huggingartists/slava-marlow")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
kbhugging/autonlp-text2sql-18413376 | kbhugging | 2021-10-15T02:36:42Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:kbhugging/autonlp-data-text2sql",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kbhugging/autonlp-data-text2sql
co2_eq_emissions: 1.4091714704861447
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 18413376
- CO2 Emissions (in grams): 1.4091714704861447
## Validation Metrics
- Loss: 0.26672711968421936
- Rouge1: 61.765
- Rouge2: 52.5778
- RougeL: 61.3222
- RougeLsum: 61.1905
- Gen Len: 18.7805
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kbhugging/autonlp-text2sql-18413376
``` |
huggingartists/shadowraze | huggingartists | 2021-10-15T02:02:54Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/shadowraze",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/shadowraze
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e2576b95c2049862de20cbd0f1a4e0d7.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">shadowraze</div>
<a href="https://genius.com/artists/shadowraze">
<div style="text-align: center; font-size: 14px;">@shadowraze</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from shadowraze.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/shadowraze).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/shadowraze")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/pkbkflsq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on shadowraze's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/tiu2mjo1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/tiu2mjo1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/shadowraze')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/shadowraze")
model = AutoModelWithLMHead.from_pretrained("huggingartists/shadowraze")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
lincoln/flaubert-mlsum-topic-classification | lincoln | 2021-10-14T13:26:57Z | 61 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"flaubert",
"text-classification",
"fr",
"dataset:MLSUM",
"arxiv:2004.14900",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- fr
license: mit
datasets:
- MLSUM
pipeline_tag: "text-classification"
widget:
- text: La bourse de paris en forte baisse après que des canards ont envahit le parlement.
tags:
- text-classification
- flaubert
---
# Classifying press articles with Flaubert
This model is based on [`flaubert/flaubert_base_cased`](https://huggingface.co/flaubert/flaubert_base_cased) and was fine-tuned on press articles from the MLSUM database.
In their paper, the reciTAL and Sorbonne teams suggested, as a follow-up, building a topic-detection model for press articles.
The topics were extracted from the article URLs, and we then merged topics to drop those with too little volume and those that looked redundant.
We ended up using the following list of topics, with these groupings:
* __Economie__: economie, argent, emploi, entreprises, economie-francaise, immobilier, crise-financiere, evasion-fiscale, economie-mondiale, m-voiture, smart-cities, automobile, logement, flottes-d-entreprise, import, crise-de-l-euro, guide-des-impots, le-club-de-l-economie, telephonie-mobile
* __Opinion__: idees, les-decodeurs, tribunes
* __Politique__: politique, election-presidentielle-2012, election-presidentielle-2017, elections-americaines, municipales, referendum-sur-le-brexit, elections-legislatives-2017, elections-regionales, donald-trump, elections-regionales-2015, europeennes-2014, elections-cantonales-2011, primaire-parti-socialiste, gouvernement-philippe, elections-departementales-2015, chroniques-de-la-presidence-trump, primaire-de-la-gauche, la-republique-en-marche, elections-americaines-mi-mandat-2018, elections, elections-italiennes, elections-senatoriales
* __Societe__: societe, sante, attaques-a-paris, immigration-et-diversite, religions, medecine, francaises-francais, mobilite
* __Culture__: televisions-radio, musiques, festival, arts, scenes, festival-de-cannes, mode, bande-dessinee, architecture, vins, photo, m-mode, fashion-week, les-recettes-du-monde, tele-zapping, critique-litteraire, festival-d-avignon, m-gastronomie-le-lieu, les-enfants-akira, gastronomie, culture, livres, cinema, actualite-medias, blog, m-gastronomie
* __Sport__: sport, football, jeux-olympiques, ligue-1, tennis, coupe-du-monde, mondial-2018, rugby, euro-2016, jeux-olympiques-rio-2016, cyclisme, ligue-des-champions, basket, roland-garros, athletisme, tour-de-france, euro2012, jeux-olympiques-pyeongchang-2018, coupe-du-monde-rugby, formule-1, voile, top-14, ski, handball, sports-mecaniques, sports-de-combat, blog-du-tour-de-france, sport-et-societe, sports-de-glisse, tournoi-des-6-nations
* __Environement__: planete, climat, biodiversite, pollution, energies, cop21
* __Technologie__: pixels, technologies, sciences, cosmos, la-france-connectee, trajectoires-digitales
* __Education__: campus, education, bac-lycee, enseignement-superieur, ecole-primaire-et-secondaire, o21, orientation-scolaire, brevet-college
* __Justice__: police-justice, panama-papers, affaire-penelope-fillon, documents-wikileaks, enquetes, paradise-papers
Topics with fewer than 100 articles were not taken into account.
We also set aside articles referring to geographic topics, which gave rise to a separate classification model.
After cleaning, the MLSUM database was reduced to 293,995 articles. An article body contains 694 tokens on average.
We trained the model on 20% of the cleaned database. On average, there are ~4K articles per class.
## Training
We benchmarked different models by training them on different parts of the articles (title, summary, body, and title+summary) and with training samples of different sizes.

The models were trained on the Azure cloud with Tesla V100 GPUs.
## Model
The model shared on HF takes the body of an article as input. We trained it on 20% of the cleaned dataset.
## Results

*Rows correspond to the predicted labels and columns to the true topics. Percentages are computed over the columns.*
_We do not guarantee the results over the long term. This model was built as part of a POC._
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
model_name = 'lincoln/flaubert-mlsum-topic-classification'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSequenceClassification.from_pretrained(model_name)
nlp = TextClassificationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("Le Bayern Munich prend la grenadine.", truncation=True)
```
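To get a score for every topic rather than only the most likely one, the pipeline can return all scores. A short sketch, reusing the objects loaded above and assuming a `transformers` version that accepts the `return_all_scores` argument:
```python
nlp_all_scores = TextClassificationPipeline(model=loaded_model, tokenizer=loaded_tokenizer,
                                            return_all_scores=True)
nlp_all_scores("Le Bayern Munich prend la grenadine.", truncation=True)
```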
## Citation
```bibtex
@article{scialom2020mlsum,
title={MLSUM: The Multilingual Summarization Corpus},
author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
year={2020},
eprint={2004.14900},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
S34NtheGuy/DialoGPT-medium-Glass_Of_Water | S34NtheGuy | 2021-10-14T12:28:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
joyebright/Top4-with-mixing | joyebright | 2021-10-14T10:09:56Z | 0 | 0 | null | [
"translation",
"en",
"fr",
"dataset:wmt",
"dataset:iwslt2014",
"license:apache-2.0",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
tags:
- translation
license: apache-2.0
datasets:
- wmt
- iwslt2014
metrics:
- bleu
- ter
- chrf2
- sacrebleu
---
|
joyebright/Top3-without-mixing | joyebright | 2021-10-14T10:09:38Z | 0 | 0 | null | [
"translation",
"en",
"fr",
"dataset:wmt",
"dataset:iwslt2014",
"license:apache-2.0",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
tags:
- translation
license: apache-2.0
datasets:
- wmt
- iwslt2014
metrics:
- bleu
- ter
- chrf2
- sacrebleu
---
|
joyebright/Top6-with-mixing | joyebright | 2021-10-14T10:09:15Z | 0 | 0 | null | [
"translation",
"en",
"fr",
"dataset:wmt",
"dataset:iwslt2014",
"license:apache-2.0",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
tags:
- translation
license: apache-2.0
datasets:
- wmt
- iwslt2014
metrics:
- bleu
- ter
- chrf2
- sacrebleu
---
|
joyebright/Top4-without-mixing | joyebright | 2021-10-14T10:08:37Z | 0 | 0 | null | [
"translation",
"en",
"fr",
"dataset:wmt",
"dataset:iwslt2014",
"license:apache-2.0",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
tags:
- translation
license: apache-2.0
datasets:
- wmt
- iwslt2014
metrics:
- bleu
- ter
- chrf2
- sacrebleu
---
|
dhtocks/tunib-electra-stereotype-classifier | dhtocks | 2021-10-14T10:03:57Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ### TUNiB-Electra Stereotype Detector
TUNiB-Electra base fine-tuned on the K-StereoSet dataset.
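The card does not include a usage snippet; a minimal sketch with the transformers pipeline (the Korean example sentence is made up, and the label names returned depend on the model config — see the original code linked below):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="dhtocks/tunib-electra-stereotype-classifier")
print(classifier("이것은 예시 문장입니다."))  # hypothetical example sentence
```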
Original Code: https://github.com/newfull5/Stereotype-Detector |
Langboat/mengzi-bert-base | Langboat | 2021-10-14T09:01:34Z | 77 | 37 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0023",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- zh
license: apache-2.0
widget:
- text: "生活的真谛是[MASK]。"
---
# Mengzi-BERT base model (Chinese)
Pretrained on a 300G Chinese corpus. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks.
[Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base")
```
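Since the model was trained with masked language modeling, it can also be queried through the fill-mask pipeline; a short sketch using the widget text from this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Langboat/mengzi-bert-base")
print(fill_mask("生活的真谛是[MASK]。"))  # top candidates for the masked token
```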
## Scores on nine Chinese tasks (without any data augmentation)
| Model | AFQMC | TNEWS | IFLYTEK | CMNLI | WSC | CSL | CMRC2018 | C3 | CHID |
|-|-|-|-|-|-|-|-|-|-|
|RoBERTa-wwm-ext| 74.30 | 57.51 | 60.80 | 80.70 | 67.20 | 80.67 | 77.59 | 67.06 | 83.78 |
|Mengzi-BERT-base| 74.58 | 57.97 | 60.68 | 82.12 | 87.50 | 85.40 | 78.54 | 71.70 | 84.16 |
RoBERTa-wwm-ext scores are from the CLUE baseline.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
joyebright/Top1-with-without-mixing | joyebright | 2021-10-14T08:56:42Z | 0 | 0 | null | [
"translation",
"en",
"fr",
"dataset:wmt",
"dataset:iwslt2014",
"license:apache-2.0",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
tags:
- translation
license: apache-2.0
datasets:
- wmt
- iwslt2014
metrics:
- bleu
- ter
- chrf2
- sacrebleu
---
|
joyebright/Top6-without-mixing | joyebright | 2021-10-14T08:55:56Z | 0 | 0 | null | [
"translation",
"en",
"fr",
"dataset:wmt",
"dataset:iwslt2014",
"license:apache-2.0",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
tags:
- translation
license: apache-2.0
datasets:
- wmt
- iwslt2014
metrics:
- bleu
- ter
- chrf2
- sacrebleu
---
|
huggingtweets/sciencebits | huggingtweets | 2021-10-14T08:42:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/sciencebits/1634200955730/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1340996475472494593/yqCQjZ06_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Science Bits</div>
<div style="text-align: center; font-size: 14px;">@sciencebits</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Science Bits.
| Data | Science Bits |
| --- | --- |
| Tweets downloaded | 2741 |
| Retweets | 759 |
| Short tweets | 47 |
| Tweets kept | 1935 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22jxh8wi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sciencebits's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/h0qt4tsw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/h0qt4tsw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sciencebits')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emekaboris/autonlp-txc-17923124 | emekaboris | 2021-10-14T07:56:17Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:emekaboris/autonlp-data-txc",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- emekaboris/autonlp-data-txc
co2_eq_emissions: 133.57087522185148
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 17923124
- CO2 Emissions (in grams): 133.57087522185148
## Validation Metrics
- Loss: 0.2080804407596588
- Accuracy: 0.9325402190077058
- Macro F1: 0.7283811287183823
- Micro F1: 0.9325402190077058
- Weighted F1: 0.9315711955594153
- Macro Precision: 0.8106599661500661
- Micro Precision: 0.9325402190077058
- Weighted Precision: 0.9324644116921059
- Macro Recall: 0.7020515544343829
- Micro Recall: 0.9325402190077058
- Weighted Recall: 0.9325402190077058
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-txc-17923124
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-txc-17923124", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-txc-17923124", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
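# Illustration (not from the original card): convert the raw logits into
# class probabilities with a softmax over the label dimension.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
print(probs)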
``` |
THUMT/mGPT | THUMT | 2021-10-14T05:49:41Z | 245 | 6 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2110.06609",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z |
# mGPT
mGPT is pre-trained on the [mC4 dataset](https://huggingface.co/datasets/mc4) using a causal language modeling objective. It was introduced in this [paper](https://arxiv.org/abs/2110.06609) and first released on this page.
## Model description
mGPT is a Transformer-based model that was pre-trained on massive multilingual data covering over 101 languages. Similar to GPT-2, it was pre-trained on raw text only, with no human labeling. We use the same tokenization and vocabulary as the [mT5 model](https://huggingface.co/google/mt5-base).
## Intended uses
You can use the raw model for text generation, or use prompts to adapt it to a downstream task.
## How to use
You can use this model directly with a pipeline for text generation. Here is how to use this model to generate text in PyTorch:
```python
from transformers import MT5Tokenizer, GPT2LMHeadModel, TextGenerationPipeline
tokenizer = MT5Tokenizer.from_pretrained("THUMT/mGPT")
model = GPT2LMHeadModel.from_pretrained("THUMT/mGPT")
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
text = "Replace me by any text you'd like."
text = pipeline(text, do_sample=True, max_length=1024)[0]["generated_text"]
```
## Preprocessing
The texts are tokenized using `sentencepiece` and a vocabulary size of 250,100. The inputs are sequences of 1,024 consecutive tokens. We use `<extra_id_0>` to separate lines in a document.
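A minimal sketch of how a document could be prepared accordingly (the exact preprocessing script is not part of this card; the joining step below only illustrates the description above):
```python
from transformers import MT5Tokenizer

tokenizer = MT5Tokenizer.from_pretrained("THUMT/mGPT")

# Join the lines of a document with the <extra_id_0> separator described above
lines = ["First line of the document.", "Second line of the document."]
document = "<extra_id_0>".join(lines)

# Tokenize into a sequence of at most 1,024 tokens
input_ids = tokenizer(document, truncation=True, max_length=1024)["input_ids"]
print(len(input_ids))
```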
## BibTeX entry and citation info
```bibtex
@misc{tan2021msp,
title={MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators},
author={Zhixing Tan and Xiangwen Zhang and Shuo Wang and Yang Liu},
year={2021},
eprint={2110.06609},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Langboat/mengzi-oscar-base | Langboat | 2021-10-14T02:17:53Z | 42 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- zh
license: apache-2.0
---
# Mengzi-oscar-base (Chinese Multi-modal pre-training model)
Mengzi-oscar is trained based on the multi-modal pre-training model [Oscar](https://github.com/microsoft/Oscar) and is initialized with [Mengzi-Bert-Base](https://github.com/Langboat/Mengzi). 3.7M image-text pairs were used, including 0.7M Chinese image-caption pairs and 3M Chinese image-question pairs, covering a total of 0.22M distinct images.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Langboat/mengzi-oscar-base-caption | Langboat | 2021-10-14T02:17:06Z | 13 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- zh
license: apache-2.0
---
# Mengzi-oscar-base-caption (Chinese Multi-modal Image Caption model)
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
Mengzi-oscar-base-caption is fine-tuned from the Chinese multi-modal pre-training model [Mengzi-Oscar](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) on the AIC-ICC Chinese image caption dataset.
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
S34NtheGuy/DialoGPT-small-pikamew362 | S34NtheGuy | 2021-10-14T02:01:56Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
BigSalmon/SimplifyText | BigSalmon | 2021-10-14T00:41:11Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | - All credit goes to https://huggingface.co/philippelaban/keep_it_simple.
- This is a copy of their repository for future training purposes.
- It is supposed to simplify text.
- Their model card gives instructions on how to use it. |
athar/distilbert-base-uncased-finetuned-cola | athar | 2021-10-13T23:50:52Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5451837431775948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8508
- Matthews Correlation: 0.5452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 |
| 0.3462 | 2.0 | 1070 | 0.5157 | 0.5183 |
| 0.2332 | 3.0 | 1605 | 0.6324 | 0.5166 |
| 0.1661 | 4.0 | 2140 | 0.7616 | 0.5370 |
| 0.1263 | 5.0 | 2675 | 0.8508 | 0.5452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.0
- Tokenizers 0.10.3
|
Craig/paraphrase-MiniLM-L6-v2 | Craig | 2021-10-13T15:01:15Z | 1,174 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
pipeline_tag: feature-extraction
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is a clone of the original model, with `pipeline_tag` metadata changed to `feature-extraction`, so it can just return the embedded vector. Otherwise it is unchanged.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
GKLMIP/roberta-hindi-romanized | GKLMIP | 2021-10-13T13:46:13Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Huang, Xixuan
and Lin, Nankai
and Li, Kexin
and Wang, Lianxi
and Gan SuiFu",
title="HinPLMs: Pre-trained Language Models for Hindi",
booktitle="The International Conference on Asian Language Processing",
year="2021",
publisher="IEEE Xplore"
}
``` |
pucpr/clinicalnerpt-chemical | pucpr | 2021-10-13T09:33:30Z | 5 | 5 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI."
- text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)."
- text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Chemical & Drugs
The Chemical & Drugs NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs in IOB2 format, starting from the BioBERTpt(all) model.
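A minimal usage sketch with the `token-classification` pipeline (the example sentence is taken from the widget above; the aggregation setting is an illustrative choice, not part of the original card):
```python
from transformers import pipeline

# NER pipeline on top of the Chemical & Drugs model
ner = pipeline(
    "token-classification",
    model="pucpr/clinicalnerpt-chemical",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

text = "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```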
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
pucpr/clinicalnerpt-healthcare | pucpr | 2021-10-13T09:32:28Z | 6 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "Acompanhamento da diabetes, paciente encaminhado da unidade de saúde."
- text: "Paciente encaminhado por alteração na função renal."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - HealthCare
The HealthCare NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs in IOB2 format, starting from the BioBERTpt(all) model.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
pucpr/clinicalnerpt-laboratory | pucpr | 2021-10-13T09:32:17Z | 5 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "Exame de creatinina urinaria: 41, 8 mg/dL."
- text: "Parcial de urina com 150mg/dL de priteinas, ph de 5,0 e 1034 leucocitos."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Laboratory
The Laboratory NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs in IOB2 format, starting from the BioBERTpt(all) model.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
pucpr/clinicalnerpt-medical | pucpr | 2021-10-13T09:28:28Z | 150 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "Hoje realizou avaliacao de mp-cdi, com eletrodos atrial e ventricular."
- text: "Paciente encaminhado a câmera hiperbárica no período da tarde."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Medical
The Medical NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), in which 13 models of clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), for 10 epochs in IOB2 format, starting from the BioBERTpt(all) model.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
Fujitsu/pytorrent | Fujitsu | 2021-10-12T18:37:18Z | 14 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"feature-extraction",
"en",
"dataset:pytorrent",
"arxiv:2110.01710",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
license: mit
widget:
language:
- en
datasets:
- pytorrent
---
# 🔥 RoBERTa-MLM-based PyTorrent 1M 🔥
Pretrained weights based on the [PyTorrent Dataset](https://github.com/fla-sil/PyTorrent), a curated dataset built from a large collection of official Python packages.
We use the PyTorrent dataset to train a preliminary DistilBERT Masked Language Modeling (MLM) model from scratch. The trained model, along with the dataset, aims to help researchers work easily and efficiently on a large dataset of Python packages, using only 5 lines of code to load the transformer-based model. We use 1M raw Python scripts from PyTorrent, comprising 12,350,000 LOC, to train the model. We also train a byte-level Byte-Pair Encoding (BPE) tokenizer with a vocabulary of 56,000 tokens, truncating each LOC to a length of 50 to save computation resources.
### Training Objective
This model is trained with a Masked Language Model (MLM) objective.
## How to use the model?
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent")
model = AutoModel.from_pretrained("Fujitsu/pytorrent")
```
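With the tokenizer and model loaded as above, a Python snippet can then be embedded, for example, like this (a minimal sketch; mean pooling over the last hidden state is an illustrative choice, not prescribed by the card):
```python
import torch

# Encode a small Python snippet and average its token embeddings into one vector
snippet = "def add(a, b):\n    return a + b"
inputs = tokenizer(snippet, return_tensors="pt", truncation=True, max_length=50)

with torch.no_grad():
    outputs = model(**inputs)

embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
print(embedding.shape)
```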
## Citation
Preprint: [https://arxiv.org/pdf/2110.01710.pdf](https://arxiv.org/pdf/2110.01710.pdf)
```
@misc{bahrami2021pytorrent,
title={PyTorrent: A Python Library Corpus for Large-scale Language Models},
author={Mehdi Bahrami and N. C. Shrikanth and Shade Ruangwan and Lei Liu and Yuji Mizobuchi and Masahiro Fukuyori and Wei-Peng Chen and Kazuki Munakata and Tim Menzies},
year={2021},
eprint={2110.01710},
archivePrefix={arXiv},
primaryClass={cs.SE},
howpublished={https://arxiv.org/pdf/2110.01710},
}
```
|
S34NtheGuy/DialoGPT-small-Harry282 | S34NtheGuy | 2021-10-12T17:21:19Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
m3hrdadfi/xlmr-large-qa-sv | m3hrdadfi | 2021-10-12T13:50:27Z | 9 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"question-answering",
"roberta",
"squad",
"sv",
"multilingual",
"model-index",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language:
- sv
- multilingual
tags:
- question-answering
- xlm-roberta
- roberta
- squad
metrics:
- squad_v2
widget:
- text: Vilket datum är den svenska nationaldagen?
context: >-
Sveriges nationaldag och svenska flaggans dag firas den 6 juni varje år
och är en helgdag i Sverige. Tidigare firades 6 juni enbart som "svenska
flaggans dag" och det var först 1983 som dagen även fick status som
nationaldag.
- text: Vad innebär helgdag i Sverige?
context: >-
Sveriges nationaldag och svenska flaggans dag firas den 6 juni varje år
och är en helgdag i Sverige. Tidigare firades 6 juni enbart som "svenska
flaggans dag" och det var först 1983 som dagen även fick status som
nationaldag.
- text: Vilket år tillkom Sveriges nationaldag?
context: >-
Sveriges nationaldag och svenska flaggans dag firas den 6 juni varje år
och är en helgdag i Sverige. Tidigare firades 6 juni enbart som "svenska
flaggans dag" och det var först 1983 som dagen även fick status som
nationaldag.
model-index:
- name: "XLM-RoBERTa large for QA (SwedishQA - \U0001F1F8\U0001F1EA)"
results:
- task:
type: question-answering
name: Question Answering
dataset:
type: swedish_qa
name: SwedishQA
args: sv
metrics:
- type: squad_v2
value: 87.97
name: Eval F1
args: max_order
- type: squad_v2
value: 78.79
name: Eval Exact
args: max_order
---
# XLM-RoBERTa large for QA (SwedishQA - 🇸🇪)
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the [SwedishQA](https://github.com/Vottivott/building-a-swedish-qa-model) dataset.
## Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
- mixed_precision_training: Native AMP
## Performance
Evaluation results on the eval set with the official [eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
### Evalset
```text
"exact": 78.79554655870446,
"f1": 87.97339064752278,
"total": 5928
```
## Usage
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name_or_path = "m3hrdadfi/xlmr-large-qa-sv"
nlp = pipeline('question-answering', model=model_name_or_path, tokenizer=model_name_or_path)
context = """
Sveriges nationaldag och svenska flaggans dag firas den 6 juni
varje år och är en helgdag i Sverige.
Tidigare firades 6 juni enbart som "svenska flaggans dag" och det
var först 1983 som dagen även fick status som nationaldag.
"""
questions = [
"Vilket datum är den svenska nationaldagen?",
"Vad innebär helgdag i Sverige?",
"Vilket år tillkom Sveriges nationaldag?"
]
kwargs = {}
for question in questions:
r = nlp(question=question, context=context, **kwargs)
answer = " ".join([token.strip() for token in r["answer"].strip().split() if token.strip()])
print(f"{question} {answer}")
```
**Output**
```text
Vilket datum är den svenska nationaldagen? 6 juni
Vad innebär helgdag i Sverige? svenska flaggans dag
Vilket år tillkom Sveriges nationaldag? 1983
```
## Authors
- [Mehrdad Farahani](https://github.com/m3hrdadfi)
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
espejelomar/fastai-pet-breeds-classification | espejelomar | 2021-10-12T13:01:26Z | 44 | 4 | fastai | [
"fastai",
"image-classification",
"arxiv:1512.03385",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- fastai
library_name: fastai
datasets:
- Oxford-IIIT Pet Dataset
- ImageNet
---
## Pet breeds classification model
Finetuned model on The Oxford-IIIT Pet Dataset. It was introduced in
[this paper](https://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/) and first released in
[this webpage](https://www.robots.ox.ac.uk/~vgg/data/pets/).
The pretrained model was trained on the ImageNet dataset, a dataset that has 100,000+ images across 200 different classes. It was introduced in [this paper](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf) and available [in this webpage](https://image-net.org/download.php)
Disclaimer: The model was fine-tuned after [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook) written by Jeremy Howard and Sylvain Gugger.
## Model description
The model was finetuned using the `cnn_learner` method of the fastai library using a Resnet34 backbone pretrained on the ImageNet dataset. The fastai library uses PyTorch for the underlying operations. `cnn_learner` automatically gets a pretrained model from a given architecture with a custom head that is suitable for the target data.
Resnet34 is a 34-layer convolutional neural network. It takes residuals from each layer and uses them in the subsequent connected layers. Advantages of a resnet architecture ([Neurohive, 2019](https://neurohive.io/en/popular-networks/resnet/)):
- Are easy to optimize, but the “plain” networks (that simply stack layers) show higher training error when the depth increases.
- Can easily gain accuracy from greatly increased depth, producing results which are better than previous networks.
Please refer to the original paper '[Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf)' written by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun.
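To make the residual idea concrete, below is a minimal PyTorch sketch of a basic residual block (an illustration only, not the exact torchvision implementation that fastai wraps):
```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Computes relu(F(x) + x): the block learns a residual F(x) on top of the identity."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection added before the final activation
```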
Specifically, the model was obtained:
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
```
## How to use
Download the model this way:
```python
from huggingface_hub import hf_hub_download
from fastai.learner import load_learner
model = load_learner(
hf_hub_download('espejelomar/fastai-pet-breeds-classification', filename="model.pkl")
)
```
Then you can use your downloaded fastai model in any way you want. For example, if the input is a PIL Image, with the following code you can obtain the resulting outputs for each class:
```python
import numpy as np
_, _, preds = model.predict(np.array(inputs))  # preds holds the per-class probabilities
```
## Training data
The Resnet34 model was pretrained on [ImageNet](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf), a dataset that has 100,000+ images across 200 different classes, and fine-tuned on [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/).
## Preprocessing
For more detailed information on the preprocessing procedure, refer to the [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook).
Two main strategies are followed to presize the images:
- Resize images to relatively "large" dimensions—that is, dimensions significantly larger than the target training dimensions.
- Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.
"The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.
In the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end." ([Howard and Gugger, 2020](https://github.com/fastai/fastbook))
Specifically, the following code is used for preprocessing:
```python
#hide_input
#id interpolations
#caption A comparison of fastai's data augmentation strategy (left) and the traditional approach (right).
dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
get_y=parent_label,
item_tfms=Resize(460))
# Place an image in the 'images/grizzly.jpg' subfolder where this notebook is located before running this
dls1 = dblock1.dataloaders([(Path.cwd()/'images'/'grizzly.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones
x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2)
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz=224)
x1 = x1.rotate(draw=30, p=1.)
x1 = x1.zoom(draw=1.2, p=1.)
x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)
tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),
Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
```
### BibTeX entry and citation info
```bibtex
@book{howard2020deep,
author = {Howard, J. and Gugger, S.},
title = {Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD},
isbn = {9781492045526},
year = {2020},
url = {https://books.google.no/books?id=xd6LxgEACAAJ},
publisher = {O'Reilly Media, Incorporated},
}
```
|
V3RX2000/distilbert-base-uncased-finetuned-squad | V3RX2000 | 2021-10-12T04:47:10Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2246 | 1.0 | 5533 | 1.1484 |
| 0.9433 | 2.0 | 11066 | 1.1294 |
| 0.7625 | 3.0 | 16599 | 1.1580 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
V3RX2000/distilbert-base-uncased-finetuned-cola | V3RX2000 | 2021-10-12T02:10:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5396261051709696
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8107
- Matthews Correlation: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5509 | 0.3827 |
| 0.3498 | 2.0 | 1070 | 0.4936 | 0.5295 |
| 0.2369 | 3.0 | 1605 | 0.6505 | 0.5248 |
| 0.1637 | 4.0 | 2140 | 0.8107 | 0.5396 |
| 0.1299 | 5.0 | 2675 | 0.8738 | 0.5387 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
lighteternal/stsb-xlm-r-greek-transfer | lighteternal | 2021-10-11T21:16:05Z | 184 | 6 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"el",
"arxiv:2004.09813",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
language:
- en
- el
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "Το κινητό έπεσε και έσπασε."
sentences: [
"H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."
]
pipeline_tag: sentence-similarity
license: apache-2.0
---
# Semantic Textual Similarity for the Greek language using Transformers and Transfer Learning
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
We follow a Teacher-Student transfer learning approach described [here](https://www.sbert.net/examples/training/multilingual/README.html) to train an XLM-Roberta-base model on STS using parallel EN-EL sentence pairs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('lighteternal/stsb-xlm-r-greek-transfer')
sentences1 = ['Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.',
'Το κινητό έπεσε και έσπασε.']
sentences2 = ["H πτώση κατέστρεψε τη συσκευή.",
"Το αυτοκίνητο έσπασε στα δυο.",
"Ο υπουργός έπεσε και έσπασε το πόδι του."]
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
#Compute cosine-similarities (clone repo for util functions)
from sentence_transformers import util
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
#Output the pairs with their score
for i in range(len(sentences1)):
print("{} {} Score: {:.4f}".format(sentences1[i], sentences2[i], cosine_scores[i][i]))
#Outputs:
#Το κινητό έπεσε και έσπασε. H πτώση κατέστρεψε τη συσκευή. Score: 0.6741
#Το κινητό έπεσε και έσπασε. Το αυτοκίνητο έσπασε στα δυο. Score: 0.5067
#Το κινητό έπεσε και έσπασε. Ο υπουργός έπεσε και έσπασε το πόδι του. Score: 0.4548
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('lighteternal/stsb-xlm-r-greek-transfer')
model = AutoModel.from_pretrained('lighteternal/stsb-xlm-r-greek-transfer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
#### Similarity Evaluation on STS.en-el.txt (translated manually for evaluation purposes)
We measure the semantic textual similarity (STS) between sentence pairs in different languages:
| cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
0.834474802920369 | 0.845687403828107 | 0.815895882192263 | 0.81084300966291 | 0.816333562677654 | 0.813879742416394 | 0.7945167996031 | 0.802604238383742 |
#### Translation
We measure the translation accuracy. Given a list of source sentences (for example, 1000 English sentences) and a list of matching target (translated) sentences (for example, 1000 Greek sentences), we check for each sentence pair whether their embeddings are the closest match under cosine similarity. I.e., for each src_sentences[i] we check whether trg_sentences[i] has the highest similarity out of all target sentences. If this is the case, we have a hit; otherwise, an error. This evaluator reports accuracy (higher = better).
| src2trg | trg2src |
| ----------- | ----------- |
| 0.981 | 0.9775 |
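A sketch of the src→trg accuracy computation described above (assuming the embeddings have already been computed with this model; this mirrors the behaviour of sentence-transformers' translation evaluator but is not the exact evaluation script):
```python
import torch
from sentence_transformers import util

def src2trg_accuracy(src_embeddings, trg_embeddings):
    # For each source sentence, check whether its true translation is the nearest target
    scores = util.pytorch_cos_sim(src_embeddings, trg_embeddings)
    nearest = scores.argmax(dim=1)
    expected = torch.arange(len(src_embeddings), device=nearest.device)
    hits = (nearest == expected).sum().item()
    return hits / len(src_embeddings)
```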
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 135121 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 400, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
## Citing & Authors
Citation info for Greek model: TBD
Based on the transfer learning approach of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813)
|
ismaelardo/BETO_3d | ismaelardo | 2021-10-11T18:50:46Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | This is the first BETO_3D test model. |
lincoln/barthez-squadFR-fquad-piaf-question-generation | lincoln | 2021-10-11T15:24:58Z | 425 | 4 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"seq2seq",
"barthez",
"fr",
"dataset:squadFR",
"dataset:fquad",
"dataset:piaf",
"arxiv:2010.12321",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- fr
license: mit
pipeline_tag: "text2text-generation"
datasets:
- squadFR
- fquad
- piaf
metrics:
- bleu
- rouge
widget:
- text: "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus, des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\
Elle est souvent associée aux <hl>données massives et à l'analyse des données<hl>."
tags:
- seq2seq
- barthez
---
# Question generation from a context
The model is _fine-tuned_ from [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) to generate questions from a paragraph and a span of tokens. The token span marks the answer on which the question is based.
Input: _Les projecteurs peuvent être utilisées pour \<hl\>illuminer\<hl\> des terrains de jeu extérieurs_
Output: _À quoi servent les projecteurs sur les terrains de jeu extérieurs?_
## Données d'apprentissage
La base d'entrainement est la concatenation des bases SquadFR, [fquad](https://huggingface.co/datasets/fquad), [piaf](https://huggingface.co/datasets/piaf). L'input est le context et nous avons entouré à l'aide du token spécial **\<hl\>** les réponses.
Volumétrie (nombre de triplet contexte/réponse/question):
* train: 98 211
* test: 12 277
* valid: 12 776
## Entrainement
L'apprentissage s'est effectué sur une carte Tesla V100.
* Batch size: 20
* Weight decay: 0.01
* Learning rate: 3x10-5 (décroit linéairement)
* < 24h d'entrainement
* Paramètres par défaut de la classe [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)
* Total steps: 56 000
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAj0AAAGOCAYAAAB8J7JHAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAEKXSURBVHhe7d1/sB11fcf/zHSmihVDKagFpqQDrVUcQptaf9Q2acf6ox1M1NKpVkpGWrRaJ3FqW6f/JPX7HR0pNSkWKUMRqEz6bRSj8kMRh8QREWyQYtFCgw0Uwy8hKYIKWrvf+9y77+Rz9+45d2/u+bHnfJ6Pmc/cuz/Ont09e3Zf57Of3V1WSJIkZcDQI0mSsmDokSRJWTD0SJKkLBh6JElSFgw9kiQpC4YeSZKUBUOPJEnKgqFHkiRlwdAjSZKyYOiRJElZMPRIkqQsGHokSVIWDD2SJCkLhh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JGm0I9+9KPioosuKu64446qz9Lt3Lmz2Lx5c/l31B5//PHi0ksvLd+fMizDnn4bXZiHQZmmZdF0MPRo4nEQXrZs2VgOxl31gx/8oFwnf//3f1/1aS/CTd041/OqVauKFStWFGvWrCnLsAx7+qlNmzYVe/furboO4f2nJSiwvRh61CWGHk08Q898Swk9HIx5bV2EoVGv50996lPl/Hz5y1+u+gzPKENPr212mkIPy2HoUZcYejTxFht6brvttuLAgQNVVzPG2bVrV9U1H7/QGd70S30hgwwNzGeTYYSehfSalzrWWdtxwUFzoflpOz3G6zduPfTwWX3rW9+quuZjO2I7WGh7Ckwvtpm2oYdxHn744aqrGdM8nG2xrcVMP11GqWsMPZp47GTbhJ44mEc57bTT5u2cN2zYUBx11FFzxjvrrLOqobMHTU6zpMPpXggHxXXr1s15Hd1xsGQ+6Ee7lbqY7xiXv+vXr58zLQ6U6YG3KfSsXr26LHX04z1QX0dR0Gs9s87ScZvWK/0ZL10HrOcdO3ZUYzRjuWL8KPH+vAfvlQ6r1yrEPKefW9M6CBF6tm3bNudzfte73lWNMatpO2Be6oEq1nm6Xtme0tdFieWK0EM5+uijDw4///zzy+Eptpd0e+X/9PPp9XlS4jPvh2nVp1//zJgOy5jOS0yb/+ufiTROhh5NvDiwpTv7utj5b9mypQwHHJxWrlxZHrgiLNAvxgn0S3fyy5cvLw9a8RoOvE1Bpe7EE08s3y/mkb/0IwQEDhwcOOsYLw1e/M98xPsyf3RzsAyHG3oQ66quaT3X1yvDYr2mGIcDIuOzziiMR780rNUxPQ6a8b5RkK5TpsE8MF66LAyjH+OyvhivHshSEXpOOeWU4pZbbikeffTRg++fHrzjc495T7enFOuWZUznM94/lqmO9z/99NOLjRs3ltNlHPr9xE/8xJx55/2ZRqx7SgSqGI+/vD4tzBPjMO1+4vvANJkOJabPdALrO5Yxvivx/oxr6FGXGHo08dgB13fEdRFWUrFTj5ATB/A4kDVh+EK1E3VxcIoDQWA6af+m8WLZ0oMJ3emBHfHaGG8UoYf1RHd9vcZ4zFOgu/7e9en1EqEjFQGn/to4KId4j/r66oVwwfgf+chHqj6zTj311OK4444rnnzyyarPfLE9pfMUAaP+2aM+bmAenvvc5855zfbt28vx+RvYpqk9qyPgNfVHbCfpZ9PL2rVry/eoY/oMC7G99FpGQ4+6xNCjiRcHtqYDCJoORiENQxEo+LW+devWxl/CUTvBr/BPfvKTVd/+4pQZO/+0MI10vggRzE96gGbeOMiEfstK/3jtKEJP23lBvRsRmppen2JdMV5qEPPYJEJPvQ3NO97xjrL/N7/5zarP7LT5DHlNFMZJQ3GvdY5e88V0OH1Zx/isC8S2ynjpNkVh+216z1gX6efw3e9+t7j++uvnFLYdsK2n4SbU1z3dTeEI6TxLXWDo0cRb6MDWb3j9oETQIWiwE+c1HEDSgxgHakIMQYThbXbqTJ/pxXvVSxqueO84RcJ7EZbSX+1Rw8GwOvrHAW3coad+wEznLdXr9SnWb31+mOemU4G95rGtCC919enG58DnxXKxjcQ46XL2WudIp5fi/Zu2KcaP/vFerOd4j7TUa3rYxtiWIuCHPXv2lNNJy/79+8th/N/0mdW3D7p5zyaMZ+hRlxh6NPHqB6S6+FWchpdA/16nAjhQRM1OE6YbB4B+pwuipqeNdFniVER62qDXskatSRykFhN66rVL9YNaqL93dPdar+k0690hnV4vTaGn1zwyL+k0Yx7bitqaXjU99957b9nddAqJ7YVx0uXstc6RzmeqTeiJbbrNaSq2DYI023Ld//3f/xU//OEP55TQa95Z7rRmh+Xtt4yGHnWJoUcTLw5s/Q6eHKTqv+Djdf0OHHEQTYNHHcObDughptEUDpowr/wi50BSP1AxHxxw6r/Yo+Yhao2aQg/zGLVIIdZBOv8Rtuqa1nPTvMTr0+Wtv0eoT69JU+iJdVr/7Fhn6QE55rmtCD1NbXq4QWJgnPry0F3v3ys4gHGjPVmqTehB0zZdR+ChRoztqKl2sB8+V8J6+jr+r9cYsbyGHk0KQ48mXhzYmto3xA43DpJcLcV9VWizw847DRUcgJgGbXUYh79xwADvw0GG1zKcEpdg9wtF4CDBeLQBidcynaZTNBxEmDfGbwpkcXCNabGMdKc1D02hJ9ZTug44cFLSA3XUIjBeug7j9fwNEXDq67V+EGSc9D1CfXpNYvnqeA/eKz6P+CzSsBXz3BafL+U5z3lOeYn4VVddVZx55pnlNHisR4hAcNlll5XvzWfBdsJ46XL2Cz2MTwiNdRzbUNvQE8vG+DEf/GUbjnlgm2Cc9LOMstB6J+AQINlGmW58H+iXbu+8l6FHk8LQo4lH7UYcXJpK4GBINzttDjgcENJfsRwE4ooVdtZR4xLjsKOnOw5ujMf4Cx08AqGK9+e1FP5vCgK8T8x7On8pwkZMi/mpT4fTFL/+679efOITn6j6zOJ1Mf+8nnXHeqiHq1gXMR/RjwNsfXljvca8pOErMLwpwMU89MNBk/dtwnuly5MGHsS20RbvQ/n6179enHHGGeV0qeW55JJLqjFm8bmwLcS2wrqKzy1dTuavaX2A8RnGa9L1wPs3BYWm/kyD92ZbZT5im41ppdOvl6bPoy6dfmzv9c+L6fRaxl7LIo2LoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPZIkKQuGHkmSlAVDT4Mbb7yxOPLII4uXvvSlFovFYrFkUZ71rGfNuaHpNDL0NLj11lvLu61ec801FovFYrFkUV74wheWd9+eZoaeBv/xH/9R3oZekqRccBd3HjcyzQw9DQw9kqTcGHoyZeiRJOXG0JMpQ48kKTeGnkwZeiRp8b773e9aOlwWYujJlKFHktr54Q9/WOzbt6+48847i2984xuWDpf//M//LB544IHqk5vP0JMpQ48ktfPII48Ud911V/Hoo48WP/jBD8oQZOleeeqpp8rPivDz+OOPV5/eXIaeTBl6JKmde+65p7j
//vurLnXd3r17e35ehp5MGXokqR1qeQ4cOFB1qeseeuihMvg0MfRkytAjSe0YeiYLoYfauSaGnkwZeiSpnUkLPRz0//Vf/7Xqyo+hR/MMOvRceunMip5Z02vXVj0kaUpMWuj56Ec/WrzgBS+ouvJj6NE8gw49O3fOhp7Vq6sekjQlDD2TxdCjeQw9ktQO9+eZ9NBDw97NmzeXZSc77JpLL7304HCeQp4ub31Y1xl6NI+hR5LamfTQs2PHjuKoo44qNmzYUGzatKlYvnx5+X/g/5UrV5bDKGvXrj0YjFbP7NTrw7rO0KN5Bh16brttNvSsWFH1kKQp0XR666UvHW15zWuqN26hHnpOPPHEMrCE22Z22Mtmdtj8Bf831f6g37CuMvRonkGHHhB6KJI0TZpCz5FHHtrnjaIcbuhhvtOAE6i92bJlS/k/tTkrZn6xbt26tXG80047rXFYVxl6NI+hR5LaaQo9t9xSFDfdNLryjW9Ub9xCGnqiVqc+/wSdqP1hGAGIU1ec+iLkxPgxjPGZzpo1a+ZNq2sMPZrH0CNJ7Uz61VuEFdr1pAg39X5gOeunw0IMixqirjL0aJ5hhp4J2jdI0oImPfScddZZc2pv6Ca8RDdXZ8X/XOXFqS76oWlYU1jqEkOP5hlG6OHKLULPhLV5k6S+Ji30XHnllXNCD/POFVrU+FA4VZW2z4lTV1HSK7to0xP961d9dZWhR/MYeiSpnUkLPbkz9GgeQ48ktWPomSyGHs1j6JGkdgw9k8XQo3mGEXo41UvoaWj0L0kTy9AzWQw9E4DLA2ldHw3GFoMW9dxifDGvG0boIewYeiRNG0PPZCH0cFxsYujpCC4LpHD/g8WGHlreRwv7tgw9ktSOoWeyGHomCM84WUx4iTtlUlNk6JGkwTP0TBZDzwRZTOjhQ+UGU/ztQujhXlbMwgQ8hFeSWjP0TBZDzwRZTOihhiduB75Q6Ln++uuLZz/72QfLT/3UT5U3mhokrtpiFriKS5KmhaFnshh6Jkjb0BOntcJCoeeJJ54o7rzzzoPl2muvLcPPIBl6JE0jQ89crIvf/d3fLfbv31/16Y3xPvGJT1Rdo2HomSBtQw+Bh2eg8MRbCv/zOv5v8/j/YZzeMvRImkaGnrnuv//+8nizb9++qk9vr3zlK4v3ve99VddoEHq8ZH1CtA09XOlF7U4UQhCv4/9eCTc1jNDD2zLrM/lLkqaGoWcuQ0+3TUToIexs3ry5WL9+fbkx8T8l7Nq1q3j6059e3HvvvVWfuRY6vVU3jNADZmERsyFJnTeJoYdjCjX/HBc4E7B169ZqyOxT1utPSmf8tdVVKPxwXrduXflaCveQY3hYSuhJp818bdy4sRoyi2NZ3HeOv+kDTvsNSxl6JgA1N9TW1Ev493//9+J1r3td8fDDD1d95orXt2XokaR2GkPPrbcWxVe+MroyMw9t0cSBYMBxAQQWrvSNbsICgShF4IkQQTBJQxFtSLnwJdbB4YYeXk/QIXTxf8wXYQbMN+8TASvGQYwbZzLSYXWGHs1j6JGkdhpDz5FHHtrhjaK85jXVGy8sDTAhvfiF4EBoSQME3U3tQRmHMw2ElQhChxt6CF0ElxT9qLUBIYb3iflK8d4Mm/c5NDD0aJ5hhR6uguf72WK7lKSJ0Bh6XvSi0ZZFhB7u0E9AiAtdKJyiol9gnKhhIRDRHVhWXkOtS5x14P8Y/3BDD6+vn5Eg6DCtWL8ENrqZX5p4RH/+8lqGMW/psDpDj+YZVuhheyb09Kh1lKSJM2lteggHcQ+3XqhhiRBE4EnHp5Yo2veENCQNI/SkqOlh/hg3DWroNywYejSPoUeS2pm00ENooaakH5aHsEHY4W+6fASKCDggaDDOUkNPtDVKT18xzfoprxDvm44fmsJSMPRoHkOPJLXDDV0nKfQwr9TMxGkgClcG12t/aFBMcKjX6kQ7G17HVV9MK21wfLihB7xXnLriyi2mE22FmD7zybDLLrusvMorTrvFMOaHwrLFsDpDj+Yx9EhSO5MWegIhh1ofCv/Xa0yoeSFMNDVgJvgQihjO6+iOq6W4w/+5555bPP7442V3P9u3by9uvvnmqmsW02Ke6u/N+9TnOdZ7v2F1hh7NM6zQM7MNl6GHv5I0DSY19OTK0KN5DD2S1I6hp7cbbrih+PjHP95Y2tQEDYOhR/MYeiSpHUNPb29961t7lgcffLAaa7QMPZrH0CNJ7Rh6JouhR/MMK/TQCJ/QU7sYQJImlqFnshh6NM+wQg+N+wk9XMUlSdNgz549xbe//e2qS133rW99q+fDuQ09mTL0SFI7tE35r//6r+J///d/qz7qqqeeeqo8vu3fv7/qM5ehJ1OGHklqh/vSsM/8xje+UdYgWLpZOKXFZ8QdtH/4wx9Wn95chp5MDSv0cO8rQk+PR6JI0kT60Y9+VDz22GPFAw880Fi4S/GgStP0R1ma5mmUpWme2paFLpM39GRqWKEHhB6KJEldYujJlKFHkpQbQ0+mDD2SpNwYejI1zNCzfPls6PG2FpKkLjH0ZGqYoccnrUuSusjQkylDjyQpN4aeTBl6JEm5MfRkytAjScqNoSdTwww9PmldktRFhp5MGXokSbkx9GTK0CNJyo2hJ1OGHklSbgw9mRpm6NmxYzb0rF1b9ZAkqQMMPZkaZujhqi1CD1dxSZLUFYaeTBl6JEm5MfRkytAjScqNoSdTwww9e/fOhp4VK6oekiR1gKEnU8MMPSD0UCRJ6gpDT6YMPZKk3Bh6MmXokSTlxtCTqWGHnuXLZ0PPgQNVD0mSxszQ0xEHZtLBrl27is2bN5eljSuvvLI4++yzi9e97nXFhRdeWNx9993VkIUNO/T4pHVJUtcYejpi9UxKWL58ebFy5cqZsLDwLBOMTj755GL9+vXFxo0bi9NOO6183Y033liN0Z+hR5KUG0NPR1DTg50zKaFN6Ln22mur/2bdeuutxdOe9rTibW97W9WnP0OPJCk3hp6OaRt6mpx66qllzU8bhh5JUm4MPR1zuKHnuuuuK193ySWXVH3muu+++4pLL730YHnf+95XHHPMMdXQwfNJ65KkrjH0dMzhhp4Xv/jFfRtA7969u1i7du3B8pu/+ZvFT/7kT1ZDB8/QI0nqGkNPxxxO6DnllFNaX/EVhn16y9AjSeoaQ0/HLCb0PPjgg8WP//iPF6961auqPu0ZeiRJuTH0dEyv0EObnL/7u78rvvOd75TdXK11wgknFG95y1vK7sUadujZsWM29KxdW/WQJGnMDD0dsWnTpjLs1Eu44YYbyu577rmn7OZ0Vn1cypo1a8rhCxl26JnJbjPzM3sVlyRJXWDo6Yi9e/eWtTz1Eh577LHipptuKp588smyu2ncKG0YeiRJuTH0ZMrQI0nKjaEnU8MOPXv3zoaeFSuqHpIkjZmhJ1PDDj0g9FAkSeoCQ0+mDD2SpNwYejJl6JEk5cbQk6lRhJ7ly2dDT/UAeUmSxsrQk6lRhB6ftC5J6hJDT6YMPZKk3Bh6MmXokSTlxtCTKUOPJCk3hp5MjSL0+KR1SVKXGH
oyZeiRJOXG0JMpQ48kKTeGnkwZeiRJuTH0ZGoUoWfHjtnQs3Zt1UOSpDEy9GRqFKGHq7YIPVzFJUnSuBl6MmXokSTlxtCTKUOPJCk3hp5MjSL07N07G3pWrKh6SJI0RoaeTI0i9IDQQ5EkadwMPZky9EiScmPoyZShR5KUG0NPpkYVepYvnw09Bw5UPSRJGhNDT6ZGFXp80rokqSsMPZky9EiScmPoyZShR5KUG0NPpgw9kqTcGHoyNarQ45PWJUldYejJlKFHkpQbQ0+mDD2SpNwYejJl6JEk5cbQk6lRhZ4dO2ZDz9q1VQ9JksbE0JOpUYUertoi9HAVlyRJ42ToyZShR5KUG0NPpgw9kqTcGHqmAAHm+9//ftXVzqhCz969s6FnxYqqhyRJY2Lo6YhLL720WLNmTXHUUUfNhIR2s3zHHXcUL3nJS8rxjzzyyGLz5s3VkIWNKvSAxWm5SJIkDY2hpyM2bdpUli1btrQOPS960YuKM844o9i3b19x8803l8Hnoosuqob2Z+iRJOXG0NMxO3fubBV6br/99nK82267repTFO985zuLVatWVV39GXokSbkx9HRM29Czffv24ogjjqi6ZsVrqflZyChDz/Lls6HnwIGqhyRJY2Do6Zi2oeeDH/xgceqpp1Zds+K1u3fvrvoc8oUvfKF4/vOff7CcdNJJZfuhUfBJ65KkLjD0dEzb0EPbn16h59/+7d+qPofs37+/+NKXvnSwbNu2rTj22GOrocNl6JEkdYGhp2Pahp4dO3b0PL317W9/u+rT2yhPbxl6JEldYOjpmLah57777ivH47L1wCXrr3jFK6qu/gw9kqTcGHo6Yu/evcWuXbuKrVu3lmGG/ymBU1LHH398GXbCS1/60vKSdV7L8Gc+85nFJZdcUg3tb5ShxyetS5K6wNDTEdyccPXq1fNK+OpXv1r82q/9WvHggw9WfWZvTviyl72sDDs0TO7qzQkNPZKkLjD0ZMrQI0nKjaFnSGibc6DDN6Yx9EiScmPoGQBOTXEJeVi5cmXZLof74BB+umiUoWfHjtnQs3Zt1UOSpDEw9AzA2pmjOZeQg7/Lly8va3kIQgzrolGGHnIfoSdpoiRJ0sgZegaABsdRo7Nhw4birLPOKv8n+Jx44onl/11j6JEk5cbQMwCEHJ6QDkIOp7vAw0ANPYYeSVI3GHoGgPvkcEqLdjy054kGzJ7emjWzesrQs2JF1UOSpDEw9AwIQad+xRY1PQSiLhpl6AGhhyJJ0rgYeoaA4MPdlLsaeGDokSTlxtAzAJzGisbLiEvWKXFVV9eMOvQsXz4beqrmTpIkjZyhZwC4estL1vtj9RB6jjqKmrCqpyRJI2ToGYB+l6wTgLpo1KEHXL1F8Fm3ruohSdIIGXoGIC5ZJ+SsWLHCS9Z7oIlTnObq6Fk/SdIUM/QMgJestzezSjzNJUkaC0PPANWfs+Ul683iNFfS9luSpKEz9AwYwYfL1aO2p6vGGXrS01y1nChJ0tAYegaENj08VT0uVaesX7++Gto94ww94KkdhB7u0uxpLknSKBh6BiAuU48GzKDGh/Y9XM3VReMOPZhZPWXw6egqkiRNGUPPAHD1Vhp4AsHntNNOq7q6pQuh57bbZkMPxdNckqRhM/QMAFdoNYUeGjJT29NFXQg9SE9zSZI0TIaeASDwcH8eQk7gqi1qeTy9tbA4zUXwscZHkjQshp4BIdxEA+Zo0Mydmrt6FVeXQg9ZkXs4Enwo3LG5o1f6S5ImmKFngKjdoVEzJa316aIuhZ7Aqa64lJ2bF27dWg2QJGkADD1DRENmanu6qIuhB9TwcBPrqPWhHbinvCRJg2DoGSJDz+Ej6KSnvDZurAZIknSYDD1DZOhZGppDxdVdlDVrvJGhJOnwGXqGyNAzGDSPirY+nO7qeHMpSVJHGXqWgFDDc7Z6la1bt+YTeqiC2bx59rzUEDD5uLSdRs5DehtJ0hQz9CxBXKLer2QTeuI8FOeghoTgw5PZeRtKw/0gJUnqydCTqaHU9ETL4yGnkbSdT4ef6SqpLfYfHIiGWFsswdCTqaG06SHskERG8EwJ3ira+QyxcknSsNA4j5tx8QWOXzFRuEMpQUgaMENPpobWkHlEtT1IGzhv2VL1lNRdfGmpnuWHURpyKDQF4LE96R1Kd+yoXigNhqEnU0MLPeykYoc1gl9q1ISP8O0kHS4CD1/UCDn8QKKRHvuM9MvLHUoJQDEeNUE+l0YDYujJ1NBCD2KHReObERjx20laLEJN1O7whW1z34n0HDZhyefSaAAMPZkaaugZcfVL+nb+IJQ6hn0AN9jiS8p9JxazT2Dc9Lk0BKfLLhvJfkXTydCTqaGGHoy4+iUuZeevpI5YSuBJ8csmfS4Nv3BoG+SVXlokQ0+H3H777cUll1xSXHTRRcX1119f9e3tO9/5zsx3fmdx3nnnFdu2bSvuuuuuasjChh56qL6OHdQIql94ixG+naQ2CCZ8KTlNNYjaGU55pe19KNT+cOrL2h+1YOjpiB07dpQhZP3MTuKcc86Z+S4vK7Zv314NbfaqV72qePWrX138+Z//eXHmmWeWryEEtTH00IMRV7/E23kJu7RE/HLYtavqOExp4GnThmcxmD+u9EprfyjW/GgBhp6OWDNzpN7Mjbkq/L9q1aqqa77HHnts5ju+rLjjjjuqPkXx2te+tpxOGyMJPSOufuGHXrR7dN8nHQa+pxFWKNSibNy4+NBCIOH1wwg8dVz9FbU/tP+R+jD0dMCTTz45831dVtb2hL0zOx/6XX311VWfuR599NFyeFobRM1Pp0IPYuc3ouqXETwNQ5o+/GJIww6lXotC2xxOI/X6AUO4oXaIceI1yT5tqJj/eM8R/MDS5DL0dMDdd989811dVuzZs6fqM4t+tO/pZcuWLcUb3/jG4mMf+1jx3ve+t3j5y19e3HDDDdXQuR566KHi05/+9MHy4Q9/uDjmmGOqoUM04uqX9O1Gtb+VJhZfGGqY0/vncJ44ggNBJr1hYBQCEL8smm4yGGUENyidI67y8k6l6sPQ0wG0wyHg1NEvPeVVd8UVVxTHH3/8zP7ntOLII48szj777DLcNLnpppvKUBSFU2dHsaMbhah+YUfJTnbI2NfyduyPJfXQL+w04VdENJxrKlydxWkmwsc4fnHwnsyHX3z1YejpgDiV1VTTc/nll1ddc9028wuM4buqxoYPP/xw8eY3v7l4wxveUHYvZGSnt5BWv7CTHUGNzwifhiFNnvhlQCGoLOY7yfeZ8SnDbq+zWPHFH8E+RpPJ0NMRBJhrrrmm6iqKffv2lf0+//nPV33mogao3n4naozaXME10tADfkGml5rSOHKItT7xo4+MNYLKJWmysO/gCzJtp4KiDeGIrhjV5DH0dMQZZ5wx51QW/3MaKjz44IPl6awnnnii7P7Upz41891eVuzevbvsRlzq/u1vf7vq09vIQ0+IU10UqqGH+IsszVjs41m9bOu2c1TW+M7xpaD2ddp+EfDlZtn8taMeDD0dcdVVVxXHHXdcGX7injvplVk0UKbfP
ffcU/Upil/5lV8pVq5cOfPjZkOxbt26mX3Y8uIDH/hANbS/sYUeUCXO+X92TpQk7A0Sb1O/ACUK+8SZVVbe0V7KSjT4ndaH1cWvHc9tq4Ghp0O+9rWvFRdccEFx/vnnF9ddd13Vd9YjjzxSBqPvfe97VZ9Zn/3sZ4tzzz23uPjii+fU+ixkrKEnpLU+NHIeUq1PNEHg7dgf1i9EocLJ8KMsRE0IZVprQqK9EvsUqcbQk6lOhB6QRtLqGM5DDfGUV2Dfz74xfWvDj6ZeXH01zW1eCHPxy8Zz2aox9GSqM6EH7KSohkmrYEYUfmD4URb4nsUl6tMeBiLc0bBZShh6MtWp0BM6GH54+yi0AaL5URS+N9N6hkBTKE4nc4532tGgj2Ul5EkJQ0+mOhl6QlP44fx8v1vgD1A9/CxUmDWuwF/q8xmloYpanhH9iBi7+BJ7a3YlDD2Z6nToCU3hJ1LGCAIQx4a0sO9kdqKkl8SnhVohZk/qjGjcy1WTueAeRCzzYh9Cyn6FXzAUDo5U6/qLZmoYejI1EaEnReLgPH09AMV152Nso0AgoulAehU+xR+Y6gzO1bJREn5ywY+m+DL22j/QnwetRi3YQoVfNIagiWboydTEhZ4UO+6410ha4jzTGDfoqJxidjjO2OZHY0cqZ4PkdE9uej2ElC8m4SX2HWlhPVGNS+H1Tb9ookQI4ofXUoMQ88Q0mBbTZF8WDQqjsI9L35txGL9rjwPpMENPpiY69AR2EhGA6jVAsVPgPNMYdgixj/TiEY0d3wM2xhyfPh7Po+EXSCBQpDU71CC3qSlmf8P0+oUgSlwBQSDhvfoVapni8xlEqb9vTL9eRtQ+sosMPZmaitBTxy9aqlmadkjs5NgB8KtoBNUvcfEIxR9hGpvYEPlRMILtvpOiQTMhIE7zUajJYZ9xuCIEsc/hh1e/INS2RO1SNBxk/tISOxPem+5478VceZGWhdpH8j7UPrHuaEqQBif2pxGuokSNV4e3NUNPpqYy9KT40lELxK+4ph1CesnVkH7x8IMw3ko6bEvZRr1fzaEvYhT2BwSGYeGzikCyUGEfxbiDCAlpEKpPPy292kdGDRDhhf/TgHi4hZ0f06qHI96H7TotIwpKhp5MTX3oqeMXEtX7/Cpq+nJS+HLGzXj4UiwxDPEdjryV45kFHQa2U34tE8jT9hvpNhq/sBcKQxzg4nVDCvYTgWVnHfBlJATokAhAsZ00FWqfCI71AEV3GrAo7F8Zv6m5wWIK23m6rfN+A2LoyVR2oaeOLxFfUr6g/aqGF6r+XUA0KeDsWs7HnYlGeh3GL1I2CHa+7NTZwde3vSicNlnKQYSDWu788vXHNk2IiVNr7LgGsc4iIKXBiEKIYt+bln7bOK8ZEENPprIPPU3SXy9NX8LDDEBRuUQlkjqMHX/a8DP97OuFqv+oFVwoCLG9ME4acHpdIk0AZ4OharD+65b3oB/D4qDR70BBWGIcD/iaJLGdp/vj+ndhCQw9mTL0tNTr/Hecq+7XmK8qD/x/u4rf/oldxeplu4ov/r+H+jcdjDw+jQmnlXq1YeCzj1+jlKZxKG3v9RIlphs79X7BSdJAGHoyZeg5DL0C0ADKY0evKP71mWuKncvWFFuWbSw2/uwni/e+ZW9ZOeCxcMj4XCOwUDsSvzL7ISTxKzRqXGqf55yShiYCzqBOHUhaNENPpgw9SxQHRk41cCBLC8EoDnJJ2f3M1cWuZauLPcevLh5ftbp48ukLh6cDy44qdixbV1z43M3F+1+9q/j0e28r9vxjVVMUp0uicLql6d4gac2TB9u5WD+xvvnclpIwTadS5xl6MmXoGT0qB5I8c7BQEfBnb9hbfPH/qYLUTHA6sHJ18f2nDb5G6WCJG6jF6bn6JaTTHo4IKCx7rA8vr5OyYOjJlKFnPOKWIQQdKhY409EX4WNmpPvesKG476TVxcPPOPFgjRFl87JNB8u6ZTuK9x69pfjkaZuKf1s3E5w2NNQ8He6puWjDVC9LqVEieKRBKy3slGptowaGeYvLwVkfC53KkjQ1DD2ZMvSMz6COsUwnbVbSlGeo0KFCg/ww5+wLB/6YAMGofglpv8v4u1Da3jitKayl7XeofpOUDUNPpgw904ljOGdquOq5HoI41h/2MZ4XEpLqJdox1WuU2oQmxkmDVlpYgHrbqKZpHG5h+rbBkbJj6MmUoScPEYIiMxB8qNyZeNRUtQktvcKapCwZejJl6MkPFSdR0TEVwUeSFsnQkylDT544CxXBh6vbJSknhp5MGXryRS1PBB8aOUtSLgw9mTL05I1mLdHQmQucbNMrKQeGnkwZekQb3wg+ca/CXiVuxbPQLXNoX8xw9inxminfv0iaIIaeTBl6BGp4uF1NnO5abIlQtNCzNhnOqTQDkKRxMvRkytCjQPCpX9FdL3ErnoVumUPNEcO5DU68ph6qIgDNu2GiJA2ZoSdThh4NQoSihZ46wXDuF1QPQJxWW/BRHJI0IIaeTBl6NC5NAYhTZAsFJ0laKkNPpgw96gIun08fl0HD516nvAhFPAyegBTjx6O1uOdQNJqmIbWnzSQ1MfRkytCjriCgpHeL5pQXp8wQQSceir7YEg2tCURMh0BkjZKUL0NPpgw96hqCTnrKq+lB6jSQpnYoanKiTVE0mmY4DanrD1utFx/DIeXJ0JMpQ4+6ivY+EVr4Sy3Q4TZ2JhDxWgJRPLQ9go+P4ZDyY+jJlKFHXUZNTpziGjQfwyHly9CTKUOPckbtT9QmGXykfBh6OuTd7353sWLFiuK4444rzjjjjKpvf+95z3uK5z3veTM772Vl2UyLzRYMPcpd+hgOnz8m5cHQ0xFvf/vbi1NOOaXYvXt3sWfPnmLNmjXFOeecUw1t9id/8ifF8ccfX1x55ZVl986dOw090iIYfKS8GHo64thjjy0+9KEPVV1FccUVV5Q1N4SgJvfee+/Mznp5cdVVV1V9FsfQI80i6MRVYwQfgpCk6WTo6YCHH364DDg333xz1WcW/bZt21Z1zfWJT3yiHH777bcXF1xwQVl6BaQmhh7pkDT48GwwH40hTSdDTwfceuutZYB59NFHqz6z6HfeeedVXXOdf/755XDa/6xfv75405veNLOzPqr4wAc+UI0x10033VS8/OUvP1hWrVpVji9pFsGH+/wQfChc0u7pLmm6GHo6oF/o2cJNSxrQn+EXXnhh1Yed9May3/79+6s+hzz00EPFpz/96YPlwx/+cHHMMcdUQyUFvnKEHoqnu6TpYujpgH6nt7Zv3151zUV/hu/bt6/qUxR33XVX2a/NaS5Pb0m9EXTS0108wkLS5DP0dASnqf7xH/+x6irKBsr9Agz9GZ42ZI7an//+7/+u+vRm6JH649RW+kywdes83SVNOkNPR3BqikvWaXtzxx13FC972cvmXLL+la98pTj11FOL+++/v+pTlMO5tJ1L1a+99tri5JNPLk4//fRqaH+GHqmd9EaG1PoQfuLhpW0QlBjX02TS+Bl6OoQbDZ5wwgllGKnfnPCWW24pXvCCF8w5nQXGY3xe1/Ye
PTD0SO3xZPb0Yahpiae4E2wIQ/xPMKJ/r/FpJL2Y4CRpMAw9mTL0SItH+OHZXZz26hWC6oVaIh50euKJzcMp3hhRGg1DT6YMPdLSEVQ4/bVhw2ywIQzxRHf69XpgKv25QqwenKgdkjRchp5MGXqkbqD2KNoMeVNEabgMPZky9EjdEfcGWrHC01zSMBl6MmXokbqF02MEH06VSRoOQ0+mDD1St3BJe7Tv8fJ2aTgMPZky9EjdQyNoQg9Xc0kaPENPpgw9UvfQnicubScASRosQ0+mDD1SN3FJO6GHuz9zZZekwTH0ZMrQI3VXPPOLuzdLGhxDT6YMPVJ3cZor7t3DHaDrGM4jLHjkRVouu2y2fxQbREtzGXoyZeiRuo2wE6e5CDOEGmp+6KZ/2+IND6VDDD2ZMvRI3Rf37qmXeJ4X9/ShwXMUTovRnxKPuKBfzqgV42o4AiOP+khrxLwRZH4MPZky9EjdR0NmAszatbOhhkbObRs3x31/uMtzzqJheL9CGDIA5cHQkylDjzT94vL3nNv2EBZZB9R4ccqQbkJkvRaN/pp+hp5MGXqk6RdXgfFsr1wRcFgHvdo2pTVB3iJg+hl6MmXokaYfB3oO5tRq5IrTewsFmgiHnObSdDP0ZMrQI00/2qlELUaObVYIOiw7Db/7Yby4RQA1P5pehp5MGXqkPETblRwvXV9MTVe0/fGGkNPN0JMpQ4+Uh7Qhb25i2ds0UqYmLBp+N90QUtPB0JMpQ4+Uh5wvXV9sLVfcEJJ15SXs08nQkylDj5SPaK+S29VJcffqxSx3BKVBXMLO+3ITRI6x8agQGktzCo1gxd8o69cfGoeSU9sigjnraevW2WVnXcQ6GtRnEQw9mTL0SPnI8dL1to2Y6+IS9jZPuY/Hg2zceCi8xIF6EIXpDhs1WiwHd62O9407WLNsg7x7NeEmAiDhL33PfsXQszgzq0x1hh4pH3HahnvW5GIpl+tHSOzVDoog0Cbc0EaI96dw4KYwXwQrAhV/o8SNEyk8XiSmQfgYxqk2jvsEj3R+FyosM8u+GPVA1VR4ZArriPXN8rMuYh0NmqEnU4YeKR8cNOMAkwsOniwvfxeLg23TJez1sEOoYfrUoEV4GdTdr5lWzAPvudTpxikkao/qD60lDKeNtxmXcMayNd29eqHww/bGqap6MCTcMD2my/QHta4Ww9CTKUOPlJd4AGnbRr2TLg7Uh7u8EZriNE8aFAg7aUgYFsJDfG6Uhd6T8eP0UbSL6fVUfqZLWFtMbQrvH1e4Uerhh/cf17pqy9CTKUOPlJc4ZcLfHMSB93BPkXAATw/wFIJUWvMzKnG6jUKYCSwboYN+bU8hEeaWWsPSFH7qNUjjWlcLMfRkytAj5YUDEAcjDo7TjjDAsi62EXMdtURdOYATNNLTXfVTR1Ei2MQpt8MNfW3Uw0+8fxfDTjD0ZMrQI+UnDprDPBB2QRpWlooan66ghiYNGXyetJGJgDMuhB9qo7ocdoKhJ1OGHik/HCA5WHapjcUwRHsc/k4bQhif3zgaAU8DQ0+mDD1SfjhYEgYIP9OMGh6WM5dG22rP0JMpQ4+Un2jrQoPTabbURsyaXoaeTBl6pDxFm5BJaH9xOAbViFnTydCTKUOPlKe4dH0a27tgkI2YNX0MPZky9Eh5ilAwrZeuT3MjZi2doSdThh4pX4QCSpcuxx4UGzGrH0NPh9x+++3FJZdcUlx00UXF9ddfX/VtZ+fOncUuHqzSkqFHytc0X7puI2b1Y+jpiB0zP0sIIevXry/OOeecmS/tsmL79u3V0P54LeNT2jL0SPniZnbsLnjK9jTV9tiIWQsx9HTEmjVris08qa3C/6tWraq6ejsws8c6auanzdqZn26GHkltEA7SRxpMy43ubMSshRh6OuDJJ58sAws1NmHvzF6JfldffXXVp9lZZ51VbNq0qSyGHkltEXTSJ3gnv7kmlo2YtRBDTwfcfffdZWDZs2dP1WcW/Wjf0wvteFay15qxUOjZv39/8aUvfelg2bZtW3HsscdWQyXlKi5hp3BF11LbwhCmaF7YqwyzVslGzFqIoacDCC9NgYV+6SmvFKe1VqxYUb4WC4WeL3zhC8Xzn//8g+Wkk04qT4tJEruRuGkhu4WtW6sBLRBiGJ/2QdGIeKEyrFBiI2YtxNDTAf1qej7ykY9UXXNtmPl5Rgme3pK0FDRo5knZ7EYo1PqsWTMbZvjtVS+9Qg7hiRqXphKn02hHNGg2YlYbhp4OiDY911xzTdWnKPbt21f2u/baa6s+c62e2YMwvKlE7U8/hh5JTaiFiUbObQohh7DE5e9taliiRmnQ7W5sxKw2DD0dcfrpp8/8epr5+VSpX7316KOPFp/5zGeK733ve1WfuazpkTQo1Prw24lCmCCg1EvbkFPHNNlVUUs0yMvlmadhhClNF0NPR1x11VXFcccdV5xxxhnFmWeeWQaY9D49N9xwQ9nvnnvuqfrMZeiRNCni5ojUEA0KNTxM00bM6sfQ0yFf+9rXigsuuKA4//zzi+uuu67qO+uBBx4oLrvssuKJJ56o+szFJe5tTmsFQ4+kcYn2N5RBXc1lI2a1YejJlKFH0jjF6SgaSy9VhCgbMWshhp5MGXokjRPteaLB9FKfAWYjZrVl6MmUoUfSuBF2CCtcwr7YRs3c6LB+6byNmLUQQ0+mDD2SuiAaIC8UWGiyuHHj7Okwxq8XLoUf5t2eNR0MPZky9EjqgvQS9nojZLqpzaEmqB5yuNEh92fl1JaNl9WWoSdThh5JXRF3guZUFTgm8X8acqjJIeQQkgZ5fx/lxdCTKUOPpK6gpiYaNddrdQhE3ntHg2LoyZShR1KXxCXsFGp1tmyxRkeDZ+jJlKFHUtcQdBZxj1Vp0Qw9mTL0SJJyY+jJlKFHkpQbQ0+mDD2SpNwYejJl6JEk5cbQkylDjyQpN4aeTBl6JEm5MfRkytAjScqNoSdThh5JUm4MPZky9EiScmPoyZShR5KUG0NPpgw9kqTcGHoy9bnPfa74sR/7seLnfu7nWpcTTjih+Omf/unGYZZDhXXkelq4uJ7aFdbRz/zMzzQOs8yWn/3Zn3VbalH8zv1cccQRRxTvfe97qyPhdDL0NNi/f3/x0Y9+tLjllltal3/4h38oTj755MZhlkPlT//0T4t169Y1DrMcKr/xG79R/NVf/VXjMMuh8uxnP7vYtm1b4zDLbLn66quLpz/96Y3DLIfKBz7wgWLVqlWNw3Ip//Iv/1Ls3bu3OhJOJ0PPgOzatas49dRTqy71wo7lj//4j6su9fKGN7yhuPjii6su9UIN69e+9rWqS02+9a1vFc94xjOqLvVy5ZVXlj82NN0MPQNi6GnH0NOOoacdQ8/CDD3tGHryYOgZEENPO4aedgw97Rh6FmboacfQkwdDz4AQev75n/+56lIvrCPX08JcT+2wjgw9/RF63JYWRuhxPU0/Q48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9AzA9u3bi3POOac4/fTTi7/8y7+s+k4PblbFrck3b95cliYHDhwo/uIv/qJYs2Z
NsWHDhuKOO+6ohhzyT//0T8X69euLN77xjY3T4TW8tt80eB2v7zWNcfriF79YvOtd7yrn/7zzzivuuuuuasghX/nKV4qzzz67eM1rXlO8+93vrvrOxXL99m//dnHmmWeW21bdl7/85fImj/22N6bBTSDZLpumMS4PPPBA8f73v79cR/3WU3ynWIZenzPrj/XI+mS91jGNM844o3j961/fOI3/+Z//Ofh5ve1tb2vc3rrgtttuK+d/586dVZ9D6N/v+5B+p/h+8j2ti2nw3eQ7Wsc0WD9Mg/XFeusK5r2p1PE94fvS6/sQ21u/71TbafTbZjV+hp4luuaaa4ply5aVG/lVV11VfinY0U6T1atXFyeeeGKxcuXKclnrHn300eK0004rl5sdM+vilFNOKfbt21eNMbtz4rUXXXRRccUVVxTHHXfcnB0D0+A1S5nGODHPK1asKD74wQ+WV/IxX0cffXRx4403VmMUxWc+85niyCOPLIexrbCs9W2Fgw/LvWPHjuL8888vl5dxA9vbM5/5zIPTaNrefv/3f7/vNMaJ9cQybtmypZy/d7zjHfO2KeaVfsw747AsLFMq1h3jsi5Yr6zfENOIbYVppNsKB3/uvss00u3t/vvvr8boDr53y5cvLzZt2lT1mcU88x1g+VhOljddRr47sdwsI8vK95TvWjicabDemsLTODBfhDH+piXFPPM9iW2FZeR7FJq2N16TOpxp1LdZdYOhZ4n+6I/+qPiDP/iDqqsobr755nLj50swbdjpsWx17DCPOeaYqmtW7CgD3dyjJ7BzYGcboaZpGjzWoz4NXhfq0xgn1k3dq171qvIXcvizP/uz4rWvfW3VNYtljm2FWhDWbxqUWGfsbAPbWzoNajjS7Y3Lk+vTYPtMp9ElDz/8cHHssceWB9zAvFLrEFgWlollA8ta31ZYJ6zfwDTS7Y1psK1EqOHA1DSN+gFz3AiHa9euLX94pKGHbZ7lSb8PLC/fkcCy8B1Kscx81xDTuOCCC8puNE2jaZtl/XUB80fo6SXCCPvlwPeB71FgW2nah6fbW30arJN+06hvs+oOQ88S/dIv/dK8A15UA0+bXqEnThGk2Bm96EUvKv9/7LHHytfV1xP9uDcG2k6jLp1G13B6Kj14s62wTCmWOQ7Wn/rUp+YtY6zz+HXeNI1XvOIVB7e3ftN45JFHqj7dwrxdfvnl5f/MI93pr2jQj2UD66tpW2HdgHXF+E3bWzxButf29uIXv7jqGj9OK1PDSq1KPfSwzbM8qfic+a6A707T9hbbCtPgmVwEz1CfBuuj3zTGjXljfpjvPXv2VH0PafqcGTe2ldje6tsKr4ntrde2stA06BfTUHfMP4poUfiV2vSF4Y6604bl5ItcR9Vv+qsHl156afGc5zyn/J82Abyu/quHfh/60IfK/5umwY6lPo26dBpdwrriicVpDQbbSvzKDixzVIPzi7v+yzxqbqK9SdM03vKWtxzc3pqmEZ9bl27ix2f7zne+s3yy81lnnXXwIYfMI/Naf+ghyxQ1Eqyv+rbCOmHdoNf2xjQuvPDC8v+m7Y1pPPe5z626xo+gw/co/k9DD9s8y5iqbyt8d+qBhWWOUzdM4+d//ufL/0N9GqyPpm02pjFuLB+nldnnMt+cerv22murobN3Nm8KPbGtxPbWtA+P7a1pGun21msa6Tar7jD0LNHTnva0xi/Mq1/96qpresTBs47TODSSTNG+gl+RuOmmm8rXfe973yu7A/1o1IqmabBDq0+jLp1GV/DLnLYTNGpMsa2kO2SwzLGtsBz1mgbWGctI42U0TeM973lP32nE5xbT6AI+WxrYvvCFLyz+8A//8GDIYR6Z1+9+97tld2CZ4nNmWevbCuuEdYOYRn17S6fRtL0xjdjexo3TWgSdUA89LAfLmIpthe8KWJZ66GGZWXYwjZe85CXl/6FpGk3bbExj3KgRfOKJJ8r/d+/eXe57CT6BbaUp9NS3laZ9eLq91afRtL3Vp5Fub+oOQ88SnXTSSY1fmLe+9a1V1/SIg2cdV89whUyKX5G/8Au/UP5/3333la+7/fbby+5Av/gV2TQNdtj1adSl0+gKfgU3HRTYVtI2GGCZY1v56Ec/WjZ+TrHOWMZ777237G6aBjUf/aYRn1tMo2t+8Rd/8eDBgXlkXr/61a+W3YFlYtnAsta3FdYJ6wYxjfr2xjT6bW9MI7a3cSIAHnXUUcVll11WNoqnEKI5Vcr/YDlYxlRsK3xXwLLUQw/LzLKDadRrtpqm0bTNxjS6Jrb1O++8s+xmW2kKPfVtpWkfnm5v9Wk0bW/1aaTbrLrD0LNEfBnqVZhUe9Z3NtMgdih1LGva+BEc+NPGs7wuvcyTX2X0+/znP192t50Grwv1aXQB89zrs2db4VRU6nnPe97B8WP9pqd2OL1BY9Onnnqq7GYab3/728v/wa9cqvfr04iDFtg+02l0DQdzLvMF88i8nnvuuWU3IvDGQYVlZb2lWK9xYIppxKkhsE7TbaXXNNLtbVy4RJ2anbRw9Rbte/gfLAfLk34f+H7RL7As9dNQfMdiW4lppAfrpmn022a7JvYJcXqO+YzTUIHvQ31badqHp9tbfRp8B/tNo77NqjsMPUtEGwF2JPEl+5u/+ZvyC9CFK4oGLQ6odeyk6R+hJsaLK4rAgS3dAdPINz3ADGIa48a89WvLVd9WWNb6tkIbi/SAUg9Rbba3+jR4+n/aPU7MR8w7+Nzr88v/zHOgO217whVYxx9//MFthemxTqK9DnjN7/3e71Vds939trevf/3r5TTS7a1LCDv1S9ZZHr4DgW0lbTgfVx3FgZdlpZtlD22mwSlI1g+apjFO8fmBBuxvetObyh8BcWozrlDje4Je20q/7W0Q01B3GHoGgB3Fs571rHKjT3ek04KdLctVL6kPf/jDZT9+/cR9ZFJUAf/O7/xOccIJJ5TVwumBO/CaftOInU2/aYxLhLR6iV+DgStBOHVBuwOG17eV66+/vtxZcjqDHS3bVh3T6Le9xTRYPzRmbZrGuMRnzLwxj/zfNH/0Y95jPJYpFQdf1iPrs+lqIqbBATDWRX1bSbfZuH9SVzWFnvg+8F3gO9G0jCwT3yWWkWVlmVP1afAd5buaYhqsn17TGCfmh1qYmDe2h/oVU7Gt8H3he7PQ9sa49e/UYqfRtM2qGww9A8LlktzHIb3x17Tg1AAH9Xqp49JX+tevmkmxk6XdQL2RaeC1/abB63h9v2mMQ7pe6qWO9dlvW+GUFfffaboENwxiGuPCPPVbP4HxWIZoqFrHsrMO0tOBdQttb2222S6gZqVpOdt8H+I7lV6anopp1ANTaqFpjEu6LfXbDmJb6fd9YFi/71TbafTbZjV+hh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPdKE4r4kPIupfn8SHngaz2gaFu6LUr/x4jjxLKQ3v/nN5Twxb5LUxNAjTai4U3Y9fHDQp/8wjeI92vr+979fzst5551XztdiQ0/T3Y4lTSdDjzShOFCvXLmyfB
hleqDPLfQwLzyKgIdALjbwwNAj5cPQI00oDtRxwOZZXaEeSJoO6mm/GP/KK688+EywX/3VXy2H/fVf/3X5HCGeA/b+97+/7Id4Dc8kInjx/+tf//p5j3PgPXg2FsP5u2PHjmrIofnfsGFDOZz/m3C6jodgMg6Fmq144GU8yysd1oQnrsd8UHgmF5iH9PWUwGsYj368duvWrdWQQ8vPQyh5bhXPY6ov/5YtW+a851lnnVUNkTQuhh5pQkVoIBRQ28NBGocbeggQ/M8zmF75yleW3Zwyos3Q5z73ueKII44odu/ePec1v/zLv1w+WJFuAsdv/dZvlcPB9AlE0eYoXhPdETjq81ZHWGA6LGd0R2hBvHc/vE8auNKnhDetH8ZlncZ4/KU7phHLwvvyf8xDPIiS8Rmevk/6v6TxMPRIE4oDddSO8H8EgTggh7ahh4cphne+853F0UcfPechljxh+qKLLir/j9d8/OMfL7tx7bXXlv2uuuqqspv/06CB9H35e+KJJ5b/91OfDuGHfswDInD00zQvodf6oaYmxThr164t/4/l/9jHPlZ2Y9u2bWW//fv3Hww9MY+SusHQI00oDsIcnEEQIEBQ2xMH5NDroB796uODWp56kKCb/ojXcIBPnXzyyQdrhxjeVOJ90/nvpWneQM1POv8LhR7WCzU1nG5at25d+ZrQtH4Ytz7flJjfpuXnKdz0i9qwOG1HGOX0nDU90vgZeqQJVQ8N0QaFGg0OtoHaifpBfVCh59Zbby27EQf9yy+/vOzm/34H+vr8N4nwFKfEQv1U00KhJzBuhJGYt6bQQ6iK04VNmpafmjL6ffOb36z6zGJcPgMCV305JI2WoUeaUE2hgdoeAgAH38B4aSiIIDGI0PO3f/u3ZTe4V84znvGM4qtf/WrZTXAgYPTSJvSAZUpDSbx3BAi624aeELVioI1QfT7p12+aTctPDdcpp5xSdc3H+LxO0vgYeqQJ1RQaOJBzcKWECDmcYiG0cMBPg0QcwFNtQw8Nd+lPoTuGg5oUamQ4nUR/CleZxYG/beiJmquNGzeW06DGpB6CFgoo1IDFPDA/zFeEJtZZnPZiODhdSGhjftPX1dcZ89+0/AxPX8twpidpvAw90oTiwNp0CoYDcxoKQACJ/hzseR2vB3/jYB3iYJ1K+6WvueKKK8r/achcR3jgveK908bEvea/CfMc04j5Dk3zX8f7xutpoBxXggXWT4yTSl/H//E63pOQc9dddxUXX3xxce655xaf/exny2FgvPS1LGf9PSWNnqFHkhYpQo+kyeK3VpIWydAjTSa/tZK0SIQeiqTJYuiRJElZMPRIkqQsGHokSVIWDD2SJCkLhh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPZIkKQuGHkmSlAVDjyRJyoKhR5IkZcHQI0mSsmDokSRJWTD0SJKkLBh6JElSFgw9kiQpC4YeSZKUBUOPJEnKgqFHkiRlwdAjSZKyYOiRJElZMPRIkqQsGHokSVIWDD2SJCkDRfH/A6MbJSagde/sAAAAAElFTkSuQmCC">
La loss represente des "sauts" à cause de la reprise de l'entrainement à deux reprises. Cela induit une modification du learning rate et explique la forme de la courbe.
## Résultats
Les questions générées sont évaluées sur les métrique BLEU et ROUGE. Ce sont des métriques approximative pour la génération de texte.
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAApcAAAGFCAYAAAComticAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAF4vSURBVHhe7d0JmBTVvffxvO/z3PfebGq84d7EJSEBlwQl4hIWNe67EEVJVNAQNSioUaLBNQIuERQxgqgoiiBCQAERQRYVFFllEUQEhn3f91XU/9u/U1VQ01M93TA91U3398NzHqZOVVdXdfdM/fqcqlPfMQAAACBLCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXAIAACBrCJcAAADIGsIlAAAAsoZwCQAAgKwhXKKgjRo1yvr37+9PZUebNm1cyWf9+vU7KLazIqZPn25dunSp9P3M9euoz7CeX/8f7AppXwCkRrhEbL7zne+kLJV18P7Tn/5ktWrV8qf2T6pQcfbZZ7uSr5o1a2ZVqlRJu52aF34Pvve977nX6kD3OXl94RJep6ZTrUvLaX46kydPdsvVqFHDrStqm7Mlefsri54j6nkUxLQNhRIuC2VfAKRGuERsdFAJgkBUqQwVCZepQkVlbm9FrV271m33M88849ekpvci/H7cd9991qRJE/vhD39od911l7+UJ1i2PMnrSy6B4HMQRctpfjqtWrWyqlWr+lOVK9XnINtS7XuhhUvtJ+ESKGyES8SmvFBxoDZt2mQzZsxwRT8nq4xwmcqOHTts5syZKbcl2bRp02zhwoX+VGbSPcf+BJEgDCZTXXJwS7VsWCbLSHmfg0zDZXnPle4zkUzvQ3nCn4ONGzfahAkTbP369W46it7T8ePH28qVK/2a1Hbt2rX3vco0XC5btsydEqDPQiqrVq2ySZMm2fz58/2a7ArWP2fOHNu5c6dfm1omn0cAhYNwidiUFyoCderUsYsuusif2keBSo9/8803/RpzrXOqC5fkFrvkcBkcqJMlH9jD6wxKcICMCjavvPKKHX300aWWT94WPYce17t3b6tevfre5e644w5/ifLp+ct7Dq07PE8lCEVRovZDWrRo4bqbw1ItG5bJMqLtSrVcqoAVCN6/5BLI5DMRbGf4fSjvdQrmN2rUqNR6dV5rmF43nY4QXqZp06ZlvkCoPljff/3Xf+2dDj8uKBLss/4Pb4O2fejQoW6ZgELfNddcs3cZlQsvvLDUNmjfw/PDJRO33nprqcdUq1bN3nvvPX+uR/XaJ5XDDz/cTWv7w/sCoHBl9tcEyAIdVNKFjyeeeMItt3nzZr/Gc9NNN9nPf/5zf8ps8ODBbjkdbEtKSlwJDryaFzjQcBksp/rgoBgcEINwEojalptvvrnMtmhdepwO/joY64Cv8KHlkoNKMoWGdM8RbKPqkrc5SvJ+qDWuV69e7vEtW7b0az3Jy0bJZBnR+lMtl/w+JAv2KXiu8D5m+pkIHlu7dm3r27evrVmzptzXKVjnX/7yl1LrPeKII2z58uX+UuZOKRg4cKB7X8Ov5d///nd/CU+wvhtvvNE+/fTTvfsQ7HswHWyT/lf9sccea+3atXMtot27d3ety8mvY/369V2dLnTSfulzVbduXbvgggv8Jfa9huFyxRVXuMemE2yj/tc+qhVXX0SSXwsto/VdffXVNm7cOPflMHiuYB8BFC7CJWKjg0qqEhxsli5d6qZ18Ar7j//4D7v33nv9Ke8gquVmzZrl15j7WXXhg+SBhkuJ2g4JwkkgaltEB//wtgTP8dZbb/k15gKA6u68806/JtpVV12V0XPsz8Fb+6Blw0V1UY9N3ucoUesLSnidmk61rqj3IUrU9mT6mQi2M9NRBLRs8mkC6kpXfdTnI0zzTzvtNH/Ko8fVrFnTn9on1b4H7+kll1zi13j0BUD1QagbOXKkm37sscfcdGDQoEGuXv9H0fm1J598ctpufHXhK0Qmb/urr77q1h9+LTSt31l9vsP25/MJ4OBFuERsdFDRgV0HlqgSUDfeMccc40+ZdevWzT02fG7cL3/5SzvllFP8qX1Up3mBOMJlqm259NJLS22L1hWeDiSvL0qmzxHsX/j1TCV43uD1f+edd6x169Z2zjnn2O9+9zvbsmWLv2Rm25i8vuQS0PalWleqgJUsantSvUbJnwk9TlfGZ0rbo9c52WGHHeZaj8PUUnj77bfbxRdfvHcbv//97/tzPVrf9ddf70/tk2rf9dqpXq2VYUF98Nrq90QXY2k9yeV//ud/ypweIKr7wQ9+YFOnTvVrvJCaXGTevHnu+W677TY3HQi+IN1www1+jbeP5557rj+1T/I2AyhMhEvERgeV5EAQpUePHm7ZESNGuGl16SWHBs3XQTOZ6jQvEEe4zHRbNB21/8nri5Lpc+zPwTvV8959991l1pHJNmayjGjdqZbTVeDh/Ukl6rn0uExeo0y3M5BqvcnrUTf3z372M/f6KQjq9QtOewjLdDsDqd7T5Ho9XudhBtuVXJKfs0+fPu7xyedtdujQwdUH5cwzz3T1qbZDgucIaLmofSxvHQAKR/q/4kCW6KASPgCVR11qOsdt7ty57nEvvfSSP8ej87xStVKFL0ZJFS6Tu+uCc/PCNB11gEw+kKbaFnXFhrdF64ra/+T1Rcn0Ofbn4J3qeXWhS/K+Z7KNmSwjQQCKcuWVV7r56UQ9V6rXKPkzkel2BvRaaLuSqeWyefPm7uclS5a45Tp37uymA7pYS/Vhya9tQHXJy0qq9zS5/o033nDTyWExis751bIvvPCCX7PPnj17yhRZvHixe4zCc5iuXld98FqIpqP2MdW+ACgshEvERgeVTA/q6mLTwfvxxx93jwt30YoGClf9xIkT/RpzP6tO8wKpwqW6EMMUPlQfpmldGZssOZxEbYvo/LTwtlQkXGb6HPtz8E71vLp4SusIX2SUyTZmsow0aNDArT98AYjoQhh1IUd1QSeLeq6o1yjqM5Hpdgb0+F/84hf+lCc457Jr165uWkP+aFpjhYb99re/dfVhmo4KXkGLvYb3CUv1nibXBwPLpxp9IFivhknS71byWKaZUFDXhUVhzz33nHve4LUQTRMugeJFuERsdFDRQV0HnagSNmXKFLf8SSed5M7BTBa+Q4taX1SCgKh5geRwKQpkWk7PqQOi1q+fVRcWXI2r8+o0PzggJoeTqG05/fTTy2yL1hEVapLXl0omz7E/B+/gebVdKmqR0rTO29Nrtn37dn/Jsssml0yXCWgbdWFI+/bt3bbqf02rXlcWpxM8V1imn4mox5ZHj9f2a4gsXQGuUq9evTKtpLroSut9+eWX3edKF+Bce+217vFhwfqS6cprzdOV2+HXLNV7GlX/1FNPuToFTHXN66IlvbbarmC5448/3m1b8Bzhko72S+tXS7++fATPl/xaqC5qfan2BUBhIVwiNsFBPapEHYhUf9ZZZ9nbb7/t15Q2ZswYF7B+/OMfu6KfVRemcKmLU5Lp4Hj
IIYe459BBUs+vn8PUvah6dT1rXnBA1M/Jy3755Zeu6/R///d/U25L1HNI1PpS0XaX9xzaRq0rk4N38LzhovMGg2FswqKWDUrw3kXNC5dk2hddaKOwof8VzjTkUiZSrTOTz0Sqx6aiZbWPKhq+SBfAKFwmh2B9joLTK4LHBO9HWDAvigKhxsvUMsHjUr2nqeq1HQp7P/rRj/beBlQtqsFywbqjSiY0pqu+fOiiKI27GvW+aV1R+5hqmwEUFsIlAAAAsoZwCQAAgKwhXAIAACBrch4uNa6dTmDX1Ys6VykTOtfpjDPOcBce6MrFVOcvAQAAIF45D5fBid86wTvTcKnbqenEeQ1loissFTLDw2AAAAAgN/KmWzzTcDl9+nS3XPhWgBp2I3koDAAAAMTvoAuXGmbju9/9rj/lCR6bPCgzAAAA4nXQhcuOHTu6wZbDgseGB0oOjB071urWrbu3nHDCCS6chusoFAqFQinkonF9dTclIA4FFS6nTp3q1+yzevVqGzJkyN7y7LPPul+ycB2FQqFQKIVc1LDy2muv+UdGoHIddOGyvG7xlStX+jWp6U4qusMJAADFQncqS3W3MyDbDrpwqXCo5cK3XtPV5ple0EO4BAAUG8Il4pTzcKlQGRSFxuDnwMSJE61GjRqlLtapU6eOG4po4cKF7pxK3es306GICJcAgGJDuEScch4u1eqosS6TS2DSpEl24okn2ooVK/wabxD1evXquVBZrVq1/RpEnXAJACg2hEvEKW+6xeNCuAQAFJtch8vt27dTKqns3r3bf5XzB+ESAIACl4tw+c0337hT2ubMmWOzZs2iVGLRaYLr16/3X/ncI1wCAFDgchEuN2zY4I65Cj1fffWV7dmzh1IJZdu2be7UwZKSEv+Vzz3CJQAABS4X4XLx4sXcOS8mu3btci2Y6ibPB4RLAAAKXC7C5dy5c/Oqq7bQ6fQDtRbnA8IlAAAFjnBZ+AiXOUS4BAAUG8JlesnjbEfR8Ihr1qzxp/IL4TKHCJcAgGJDuExPwTI8znaUY4891vr06eNPVa5BgwbZAw88YJdddlna7RLCZQ4RLgEAxYZwmV6+hUvdIKZVq1ZuuzK5PbbC5caNG/2p3CJcAgBQ4AiX6QXhUncBfPnll+3JJ5+0YcOG+XM9UeFy+PDh1qlTJ+vSpYtNnjzZr/UoIIa72vXz/txVUAiXBwHCJQCg2BAu0wvCZY0aNdz/Z511lgt16p4OJIfLxo0bW9WqVa1p06bWqFEjt7yCZkDTyeEyk6AYRrg8CBAuAQDFJl/C5ZVdxsZarn5hnP/M6SnEKViGw2GHDh3s9NNP96dKh0u1VCaHPrVKHnHEEf4U4bJoEC4BAMUmX8LlqY+NtJ/f+25sZX/DpUJceJsnTJjg6ubPn++mw+FSLZWtW7feWxQsVcJBUD8TLosA4RIAUGzyJVzOWLbJpizeEFuZs2qL/8zpKcRVr17dn/Jo+xXsgnMpw+Gyfv361rZt270lCJcqAcJlkSBcAgCKDedcpheEuKFDh/o1Zr1793Z1wRA/4XB577332kknneR+TkXnY2odgSeeeIJwWYgIlwCAYkO4TE8hThfynHnmmfbhhx/undZwQIFwuFy6dKkLfc8884xNmzbN1UnXrl39n8yuv/5615Kpe6xrfZdffnnG4VLLB0WPCU9HIVzmEOESAFBsCJfpKbQpTCoMVqtWzQ455BB3XmWYwmX46vFly5ZZw4YNXa5QAFRp1qyZP9dcd7qmVV+3bt29QTET2o5gneFCuMxDhEsAQLEhXBY+wmUOES4BAMWGcFn4CJc5RLgEABQbwmX+qVmzpv3yl7+MLNu2bfOXyhzhMocIlwCAYkO4zD+7du2ynTt3RpYDQbjMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTLwke4zCHCJQCg2BAuCx/hMocIlwCAYkO4TK+8WysGdFtI3ZUnHxEuc4hwCQAoNoTL9ILbP5YnfG/xyqZbTwa3fNT9zjt27OjPiUa4zCHCJQCg2BAu08u3cHnGGWfYwIEDbfPmzTZ69Gj7/e9/X+Ze52GEyxwiXAIAig3hMr0gXC5evNhatWrlgtzgwYP9uZ6ocDlkyBC7+eab3fK9evXyaz1t2rQp1dWun1V3ILp06WKHHXaYP1UW4TKHCJcAgGJDuEwvCJePPfaYC2kPP/yw65J+5513/CXKhsvGjRtb1apVrWnTpnu7sTt37uzPTYSsxHRyuFTdgXjqqaesXr16/lRZhMscIlwCAIpNLsJlSUlJ2XD53G/jLa9c5D9xegp+NWrUKNX62LVrVzv99NP9qdLhUi2JyUFRrZJHH320P5W9cLl27Vr3uPK65AmXOUS4BAAUm7wJl08dY9b6kPjKfobL//f//p+753fg888/d6Fu/vz5bjocLtVS2bp1671FwVIlHB6zFS71mPLOtxTCZQ4RLgEAxSZvwuW6ErM1s+MrGxf7T5yegl/16tX9KY+2X8Fu8uTJbjocLuvXr29t27bdW4JwqRLIRrhUa2q6YCkKlxs2bPCncotwCQBAgcubcJnHguA3dOhQv8asd+/eri4IbeFwee+997rpFStWuOkoOh9T6wg88sgj+xUua9eunVGwFMJlDhEuAQDFhnCZnsKlLujRmJIaLD2Y1pXjgXC4XLp0qQuKLVq0cMtqyCBdXd6kSRM3X66//nrXkrl8+XLr3r27HXPMMRmHywYNGrhgqXWHSyqEyxwiXAIAig3hMj0FN4VJhcFq1arZIYccUqbVUF3U4ZZN3a2nYcOGLlcoNKo0a9bMn2uuO13Tqq9bt+7e58iElosqqRAuc4hwCQAoNrkIlwfbUEQHO8JlDhEuAQDFhnBZ+AiXOUS4BAAUG8Jl/qlZs6b9+Mc/jizbtm3zl8oc4TKHCJcAgGJDuCx8hMscIlwCAIoN4bLwES5ziHAJACg2hMvCR7jMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTL9NLdXlEGDRpkCxcu9KfyC+EyhwiXAIBiQ7hML5NbM4bvLV7ZtC3BLSX1vNdee62NGzfOn1uWwuXGjRv9qdzKi3B5zz33WNWqVe2II44ocx/PKLrv50UXXeTu+6n7fN50003+nPQIlwCAYkO4TC/fwmXXrl39n/ZtW3kZiXAZ0qJFCxcQdXN33eReL174pu9RqlSp4gLm8uXLbfDgwS7VazoThEsAQLEhXKYXBLh+/frZ9ddfb5deemmZbBEVLrVMw4YNXfDTY8M0L9zVrp8zzSvJ9FjlnVQIlyEKip07d/anzHr16uVePIXNKFEvrt6o8l7wMMIlAKDYEC7TC8Jl0GDVunVr93M4oySHS80/6aST7JlnnrFWrVrtfWxA08nhUnX7a968eXbNNdfY6aef7teURbj0rVmzxr3IEyZM8Gs8quvdu7c/VVbt2rWtY8eO7me9mGr5vOOOO9x0OoRLAECxyZdweW6/c2MtN7x3g//M6QXBr0ePHn6N2QsvvGBHH3
207dy5002Hw2VUw9YDDzxQqk4/VyRcBs+hUwe7d+/u10YjXPqmTJniXrTkD5/qOnTo4E+VtWDBAmvcuLGdcMIJ9v3vf9/at2/vzynrk08+sdNOO21vOfHEE+2www7z5wIAUPjyJVye3fdsO+G1E2Ir+xsu/+M//mNvkJRp06a5TKLgJuFwqQtsFP6Sy3/+53+6+aLHViRcBtQIp3XXrVvXrymLcOkL3rSocNmpUyd/qjS9cGeccYY7V3PgwIH29NNPu2ZsvehR1q5da8OHD99bunXr5rriAQAoFvkSLtfuWGurt6+OrWzYmfnQPAp+1atX96c82n5lkuBUvXC4rF+/vrVt23ZvUTd6UALZCpcSPHbp0qV+TWmES59eBL1QUd3iAwYM8KdKU6BUOFSXeuC+++7L+M2iWxwAUGw45zK9ILwNHTrUrzF3ip7qgvEjw+Hy3nvvddMrVqxw01GSu7Pvv//+jPNKMl0spMfOnj3brymNcBmi8yXVmhjQ1d+HHnqoLV682K8p7ZVXXnFDFoWp1bK8FzyMcAkAKDaEy/QULtUTeuaZZ9qHH364d1oX6gTC4VItiMoe6knVsps3b3YZpkmTJm6+6KpzZRSNbqOsU6tWrYzDpcKr1quidfzyl79025MK4TKkZcuWLmBqYNCZM2davXr1rHnz5v5c7zyD4447zr0xwbTemOeff961Xo4ePdouv/zycl/wMMIlAKDYEC7TC8Kkgly1atXcWNrJ40oqo4wcOdKfMlu2bJkbhki54n/+53/snHPOsb/+9a/+XHPd6RpeUblF50sGz5GJW265xU499VT3WOUgrae8RjTCZRJ1ax911FHuzUl+Iz/99FOX9MPNzq+99ppddtll7o3XByDdCx5GuAQAFBvCZeEjXOYQ4RIAUGwIl/ln27Zt5Zb9RbjMIcIlAKDYEC7zT82aNd0YmlGFcHmQIVwCAIoN4bLwES5ziHAJACg2hMvCR7jMIcIlAKDYEC4LH+EyhwiXAIBiQ7gsfITLHCJcAgCKDeGy8BEuc4hwCQAoNoTLwke4zCHCJQCg2BAu0wtus1ieXr16uf3KR4TLHCJcAgCKDeEyvUxuzRi+t3icdOtI3QZS25gK4TKHCJcAgGJDuEwvX8Pl008/bWeeeSbhMp8RLgEAxYZwmV4QLvv162fXX3+9XXrppWW6yaPCpZZp2LChNWrUyD02TPPCgVA/p+t6D5s5c6YLlR9++CHhMp8RLgEAxYZwmV4QLhXiFABbt27tfu7cubO/RNlwqfmNGze27t2720svvWTnnHNOqfCYHAj1s+oy1aBBA3vggQfcz8nrSka4zCHCJQCg2ORLuJx7+hk2u9bJsZVF1zX2nzm9IPjpop1A165d3b2+d+zY4abD4VIh8kc/+pH7OfDYY4+VCo8VCZddunSxU089de9zEy7zGOESAFBschEuFXaiwuWs446PrexvuDz88MP9KU9JSYkLdeqelnC4vO666+zee+915f7777eHHnrIHn74YTvssMPcfDnQcDlv3jz7yU9+YoMGDfJrCJd5jXAJACg2+RIuv9mxw77Zvj228u3Onf4zp6fgdsQRR/hTnoULF7pQN2PGDDcdDpf169e3Zs2a2ejRo0uVcAA80HCpVlF1saubPih6XPBzFMJlDhEuAQDFJl/CZT4Lgt/w4cP9GrOePXu6uk2bNrnpcLhUi2XVqlVt0aJFbjrKSSed5LrWAy1btsw4XCYXPS74OQrhMocIlwCAYkO4TE/hUq2CZ511lvs5mO7QoYO/ROlwuXTpUhf4WrRo4ZZVAFU3dpMmTdx8ufXWW6127druPM727du7rvRMwmUUPU7PkwrhMocIlwCAYkO4TC8IkyNHjnRXaR9yyCEuDIZdcsklrus7sGzZMjcMkXKFzpFUV/ngwYP9uZ6g1VFd6MFzHAg9jnCZpwiXAIBiQ7gsfITLHCJcAgCKTS7C5cE2zmXcFAT1+kSVb7/91l8qc4TLHCJcAgCKDeEy/9SsWdNq1KgRWbZv3+4vlTnCZQ4RLgEAxYZwWfgIlzlEuAQAFBvCZeEjXOYQ4RIAUGwIl4VP4XLDhg3+VG4RLgEAKHC5CJcrVqxwd7hB5duyZYvNmjXLdu/e7dfkFuESAIACl4twuXXrVhd4FixYYGvXrrV169ZRKqEsX77cZs+eXe6dguJGuAQAoMDlIlzKtm3bbOXKlTZv3rzIMn/+/FhK1HNno0Q9VyYlal0HWhYvXuxOP/j666/9Vz33CJcAABS4XIVLFCfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOhEsAAAoc4RJxIlwCAFDgCJeIE+ESAIACR7hEnAiXAAAUOMIl4kS4BACgwBEuESfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOhEsAAAoc4RJxIlwCAFDgCJeIE+ESAIACR7hEnAiXAAAUOMIl4kS4BACgwBEuESfCJQAABY5wiTgRLgEAKHCES8SJcAkAQIEjXCJOeREu77nnHqtataodccQR1qhRI7+2fPfdd58dd9xx9p3vfMeVNm3a+HPKR7gEABQbwiXilPNw2aJFC6tRo4ZNnjzZSkpK7Oyzz7ZmzZr5c6Np/pFHHmn9+/d306NGjSJcAgCQAuESccp5uKxSpYp17tzZnzLr1auXa4lU2Iyies0fPHiwX7N/CJcAgGJDuEScchou16xZ44LihAkT/BqP6nr37u1PldavXz83f/r06a7VU93oqssU4RIAUGwIl4hTTsPllClTXFBcv369X+NRXYcOHfyp0p555hk3X+dnNm3a1Bo3bmyHH354ym7xMWPGWK1atfaWX/3qV3bYYYf5cwEAKHyES8Qpp+Fy2rRpKcNlp06d/KnSVK/5L7zwgl9jdtddd7m6DRs2+DX7rFu3zj744IO9pXv37q4rHgCAYkG4RJxyGi43btzoQmFUt/iAAQP8qdJUr/nLly/3a8zmzJnj6lKdpxlGtzgAoNgQLhGnnF/QoyvFu3Xr5k+Zu1Dn0EMPtcWLF/s1pale88MX9ARd5UuWLPFrUiNcAgCKDeESccp5uGzZsqULmOPGjbOZM2davXr1rHnz5v5cs/Hjx1u1atVs2bJlfo25+RqySEMQDR061KpXr27169f355aPcAkAKDaES8Qp5+FSNCD6UUcd5UJf8iDq6uo+7bTTbOXKlX6NR8tpeT0u0zEuhXAJACg2hEvEKS/CZZwIlwCAYkO4RJwIlwAAFDjCJeJEuAQAoMARLhEnwiUAAAWOcIk4ES4BAChwhEvEKSvhUnfG0VXdGhoo3xEuAQDFhnCJOFU4XDZr1swNYK7AFoRLjV3ZtWtX93O+IVwCAIoN4RJxqlC4fPnll+3aa6+1vn372sMPP2yjR4929WPHjnWDoecjwiUAoNgQLhGnCoVLDWTevXt39/ODDz64N1xu3brVfvCDH7if8w3hEgBQbAiXiFOFwmXDhg1d66XoLjtBuHzvvffcLRvzEeESAFBsCJeIU4XCpW67eOmll9qUKVOsVatWLlwOHDjQrrnmGjedjwiXA
IBiQ7hEnCp8Qc+NN97oLug577zz7KSTTnI/V61a1Z+bfwiXAIBiQ7hEnCocLmXEiBHu6vAOHTrY4MGD/dr8RLgEABQbwiXiVKFwefbZZ7uu8YMJ4RIAUGwIl4hThcJlixYtCJcAAOQ5wiXiVKFw+cUXX1iNGjWsW7dutmDBAr82vxEuAQDFhnCJOFUoXKrVUhfwRBV1mecjwiUAoNgQLhGnCoVL3e6xvJKPCJcAgGJDuEScKhQuD0aESwBAsSFcIk4VDpczZ8503eO6W49uB6k79eRrq6UQLgEAxYZwiThVKFx+8skn7vzKY4891ho3bmzNmjWzU045xdUFt4XMN4RLAECxIVwiThUKl3feeWfkhTtqyaxZs6Y/lV8IlwCAYkO4RJwqFC4VLFN1gav1Mh8RLgEAxYZwiTjRcgkAQIEjXCJOnHMJAECBI1wiThXuu1bAbNCggWvBDEqfPn38ufmHcAkAKDaES8QpP0+MrESESwBAsSFcIk4VCpcDBw5051cmU11UfT4gXAIAig3hEnGqULi8++677amnnvKn9unXr5879zIfES4BAMWGcIk4VShcMhQRAAD5j3CJOFUoAd56663WokULf2qfjh07MhQRAAB5gnCJOFUoXE6YMMGqVKlit99+u2vBnDZtmj3wwAN21FFHMRQRAAB5gnCJOFW471qtlAqY6gYPSqNGjfy5+YdwCQAoNoRLxCkrJ0bu2LHDZs6caTNmzLBNmzb5tfmJcAkAKDaES8Qpq1fdLF261GbPnu1P5SfCJQCg2BAuEacDCpe6SrxLly7+lKdu3bp7u8WrV69uQ4YM8efkF8IlAKDYEC4RpwMKlz/84Q9tzZo1/pTZ4MGD7Re/+IV169bNdY+ff/751rRpU39ufiFcAgCKDeEScdrvcDlnzhzXOhlWv359u/nmm/0ps1deecWOO+44fyq/EC4BAMWGcIk47Xe4nD59uguXq1atctMlJSVuunfv3m5aNCxRcgDNF4RLAECxIVwiTvudAHVl+BFHHGE9evRw0506dbJatWq5nwMKl9z+EQCA/EC4RJwOqHmxTZs2rmVSF/bo/z59+vhzPJofdeeefEC4BAAUG8Il4nTAfdfDhw93IXLZsmV+zT6q79evnz+VXwiXAIBiQ7hEnPLzxMhKRLgEABQbwiXiRLgEAKDAES4RJ8IlAAAFjnCJOBEuAQAocIRLxIlwCQBAgSNcIk6ESwAAChzhEnHKi3B5zz33WNWqVd3g7I0aNfJr01u4cKEbZ3N/7gZEuAQAFBvCJeKU83CpwdZr1KhhkydPdreS1MDszZo18+eWT0E0GNA9U4RLAECxIVwiTjkPl1WqVLHOnTv7U2a9evVyYVFhszzdu3e3c845h3AJAEAahEvEKafhcs2aNS4YTpgwwa/xqK53797+VFkrV660n/zkJzZlyhTCJQAAaRAuEaechkuFQwXD9evX+zUe1XXo0MGfKuvPf/6z3Xbbbe7ndOHy448/thNPPHFvOfbYY+2www7z5wIAUPgIl4hTTsPltGnTUobLTp06+VOl6Z7lRx55pG3fvt1NpwuXWvdHH320t/Ts2dN1xQMAUCwIl4hTTsPlxo0bXTCM6hYfMGCAP1WaLv5RoAwXLa//R40a5S+VGt3iAIBiQ7hEnHIaLkVhsVu3bv6U2eDBg+3QQw+1xYsX+zWlJQdLFcIlgGQ7vvra5q/ZZnNXb/VrgOzatOMrW75xh81ZtcWmLt5on5SstaGfr7Q3Jy+17mMXWucPSuxf78+15z4ssRdGz7OXP55vr36y0HqOW2RvTFxsfT9dYv2nLLNBny23d2essGEzV9rIWats1Ow19vHctbZ0ww7/mSqOcIk45TxctmzZ0gXMcePG2cyZM61evXrWvHlzf665+p///Oe2bNkyv6a0IFxminAJHPx0QNfBXAdyHcTbD5ttLftOs+tenmDndhhtv354mP383nf3loue+cgeHjTTHcDXbd3trwWBXXu+sfXbdtvi9dtt1orNNmnhevtw9mobPH259Zm0xIUihaTH3p1l9/efYXf0nmp/7j7JrnlpvPv/9sT0fYn6RxPztZyW750IT+8kQpPWM3HBevti+WZbtG67ex49Xy6E91Pbk24/tV/avz90HW+XPPux/e7JD+3kR0fYcQ+9V+rzVVlFoTRbCJeIU87Dpdx333121FFHudCXPIi6LvpR4NQV4lEULjU2ZqYIl8DBYduuPTZ23jrrkjjA/qXnZLs0cXDXgT3qILy/5eynRtnf+n3mWo/mrSmcls0N23fbwnXb7LMlG+3juWtci1iPcYus0wcl9sjgL9w+3/Tap9boxXF2/tOj7dTHRka+PnGUq54f60JbXEX7HLUdFSn6ElP78ffdF5oGnT+JfN6KFIXzbCFcIk55ES7jRLhEPti++2tbs2WXLVi7zWYs22QT5q9zXWHvfb7SBk5dZv+etMReG7vQXvzIa0lp996X1vadL+z+ATPsb30/sxa9ptiNr01yLXVRB6VMyh8T5bY3prpWP3XTjZ6zxmav2mI7v/ra38r4qEXp00UbrNuYBfbXPlPtvETwiTqYB+WUR0da/c5j3Gug1+TZxGuk1+yjxD58uWKza50KUxe5wtZTw2dHhoxaj4yw5onXVK2geny+0f6oRVH7pxY27a9a1rT/eh30eiTv0/6Wmm2GW712H9iFHT+yq18YZ026TXSvyT1vTrc278y0p0fMsWdGpi8dE8tpeT1Oj7/+lYlufVqv1v+btsMjnz+uoucP76e2L7yf2n51Yb8+fpENSPwujvhilY1P/H5OX7rJnWaxOvF7u233Hv+dOXgQLhEnwiXylro+1W2lP+wHS3lm5Fz7x9ufuwDYrOdka5wIfw2e+8SFpTr/fN9OaF26uzZfy4mJoKHQcnOPT631oJn20sfzbciMFfbZ0o1Z6VbWgVqBttVb012LZNQ2qFz0r4/dQV+hT93gKzZl7xw0ddWqRa9Jtwllujm1/2rh0zlwUe9zZRWds6cvEfryoJa90xMhKLxd6Yo+X3rMZZ3GuM+e1qPw/WTiC4S+qCiAa5/0ZUYhWq+nvujkkoLa5h1fuQCt4KZtUre1vnjpfFmFan0B0/sf/D3Q/5pWvebrnEctr8fp8VqP1qf1HoxBsDIQLhEnwiVySgcEtZgpPKjbrumrk1wXU9SBs5DKsQ8Oda1lZ7b/0C5OBCi1pqmlSGFO53kpUD048HN3DluH4bPdhQE6H0ytKf0+XeK6y9Siota45ICSaVGQ0brUMnp3v89cK6jOKYva3uRyeSK8RLWGpisK2lHrU9FzKwx1TYQgBb+4W1DVlawAptbAGknnbOa6/Pofw+ycDqPs2pfG213/nuZasnVhiAL/lMUbbFkWL/xAYSJcIk6ES8RCF17owK2uPIUYdUtFHUTDRd1XCiNRIaWyiw7i6i7Tyfw630/dZgp9umhEwU/78dDbn7tuNJ38327oly4EqiXslU8WuNCmA78C3ORFG1x3swLAxu1f+a9Iflu1eadrGVKIVRehgq6C/wUdP3JBJ+r92p/y28ff
dy2DCs0fz11rW3bmX+vS58s2uW5eBbnHh8xyLYpqldZ7//fEZ0CfBV3YokCsz4hCqT4zwakK+l/Tqg8+Q1pej9Pjg8+Q1us+Q4nn0Wfo+cTrrSuIx81b584HzXXLIgoD4RJxIlwiLZ0Lp/OQdLD8fSLsqatSF0Som/ektiPs+ApcOamT4XUgVmDThRu6alNdr1t30ZV1MFDwCbo0dQ6puiSXRHRpTlvidWmqO5artYH4ES4RJ8IlytDYar0mLLJbXp+8361UunpS3b11n/jAdeNp+I4ru4y1G16Z6FppdMHG+7NWu3OkAADxIFwiToRLuCtpNXCvunijrtLVBQIaw05dpDq/S+PD6apJdfOqFUpDxgAA8hfhEnEiXBahb7/1zidTN7SGo6n2wJBSYfK0x0a64WB09whdsQ0AOLgRLhEnwmUR0LAcGkPx+VHz3Dh/yePMafgSXXCgq0/prgaAwkO4RJwIlwVELZK6kGLw9BXuylOd5xg1uLLOo/zTqxPdkC9qwfxGDwQAFCzCJeJEuDxIffX1N+4qXA2KrOFRGj4/NvLim+oPDHWDYWsoGQ2PoyFxCJMAUFwIl4gT4fIgo2FfdLcUjROYHCRVdL9g3T9YXdwaFBoAAMIl4kS4PEjojioagPmYB4fuDZK6u4vuDa2wqTua6KpvAACSES4RJ8JlHtPA1LqrjQYsDwKlur41LJAGGgcAIBOES8SJcJlndDrkmJK17pZy4SGCdN5k74mLbdtuxpQEAOwfwiXiRLjMExqMXONOqqs7CJQ1Hh7mLsT5ciXDAwEADhzhEnEiXOaQrtrW+JPNek62avfva6XU7RJ1ZfdOzqEEAGQB4RJxIlzm0K2vTy7TSjl39VZ/LgAA2UG4RJwIlzl03EPv2a8TobL/lGV+DQAA2Ue4RJwIlzmiMSjVYnl5pzF+DQAAlYNwiTgRLnPklU8WuHD58KCZfg0AAJWDcIk4ES5zRIOfK1y+PW25XwMAQOUgXCJOhMscqfPPD1y4XLJ+u18DADjobV9vtnGx2aqZZovHmy36xGzJBLNlk81WfObVr/nSbF2J2YaFZpuWmW1ZabZtrdmOjWa7tpp9tcNfWfYQLhEnwmUOrNy80wXLWo+M8GsAHLCdm8zWzvUO5F8MMpvUzWx0O7N3/2bW93qz1y4zG/p3b54O4Dhwu7d5QUjBaPlUs4VjzJZOSvxR+9yr27TUe421XL7atcVs62ovAK6ZnQh8073wt+Ajsznvmc0caPZZb7PJ3c3GP282pqPZh4+bDbvfbNDtZv3+ZNbrKrNXLzF7vq7Zv040a/8Ls9aHZL989JS/0RVHuEScCJc5MGTGChcuNb4lgHJsWGQ27wOzsZ3Mhj9oNqCZ2etXmr14htnTx0cfkNOV504ze+cOs2m9vEBULHZu9gKVWs/mj/ZC1KeveuFp5MNeGNfr2+c6sx71zV4626zzqd7r/M8jo1/LTIoe+2Q1s2dqeOvTe9ftAi/09/y9F9R6X2P27yZmbzY163+z2cBbE0Eu8R4Nvsv7YqBgNyKxje+39YLeiH949VpGy+uxWo/W+fJ5Zi/UM+t0slnHX3nB7/GfRG9bvpcxz/hvXsURLhEnwmUOPDL4Cxcudd9wFCB1a21bkziQLzFbO8ds5QyzJRPNFnxsNmeY2ReJP/Cf9fFaRia84B1ARj3hHTz3HjD/YtZXB8yrQwfM0xMH51MSB8xfJw7Wv0wcMH8afUDan/JU9cSB/nyzfjd4AWNiV7O5I7yuu90xnbKhLsBlU8xm9Eu8Dv/0tuXFM6O3N6ooOKj1SIGl9x+91qUPHvHWFRSFEe1n1OP1Gug5J77k7ffBQi2Iy6d5n6lPX/H2c8g9Zm/d5AUt7W+nWpXXqnawFn1e9JooeCqAKojq90u/Z3rdFFQVWPV7qN9HfXb02o591vudnfGm18Kp7m61eq5P/B1XS2gldGVnE+EScSJc5sAVXT5x4XLSwvV+DcrQH2q1Ki0a64WdWe944WNKDy8AffIvs9Htzd5vY/befWaD7zQbcIsXEhQwejQw635pqGUkUaeDhrq09raMJEKIWkZ0QHYtI4mDiFpGXBhR0GvltXAlBz0dtCsj6OVjUauTuv7e+EPidbrbe91nDvC6Qw+kzPvQe//0nuk9Stf6qOd/5SKzt2/zWtj0/s8e6p2/pla4/bVnp9f9qdYvvZdRz9nuZ17r3bjO3jlwcVM3v87Jmz/KbNobZh8/7b322iaFoANtsX0s8XdPj9X7qd+Nfzf2XtfhD3ndr/qiM/V17/QBtWzqC9HqWV5Xd0VeB+2PgrDOL9T6FIgXj/Oeo+R9L6h9Odj70qXgpi9eU3t6QU6BX13TarnW+//Rk97vp35WvZZRF7ZaYbUevbfq4lboU5e3PiMKfuoKL3KES8SJcBmzPd986271WO2BIbZrzzd+bZHRCexLP/UCo4KGAqKCocKGuiwr0gWXD8W1jFT1DuRqOdLB/OVzvQP66w29g7palxRuFWx1cFfYUYgY38XrqnQHzESIU5DSQVjnE6o7UwdMdRVvXeV1c1aU1qUWGD2fDtzaJr0P2u6ofaus8uxJ3pcAhfopr3n7G9f5kQojeu31/AfTlwSF4C61E1+grkh8WWpu9sGjiUD8nBdI9bnRa6iQqs8Kih7hEnEiXMZs8qINrtWyQefEAf1gopZEtSKWjEzdiqgWhVRF4Urdj1EHyVRFrYJqJdTBs8+1Zm/+2eztFmbvtjQb9oB3MI16rmyUXAS9fKR9Uyuh9l/vs1rQ1IqpoHygRef1qaVs9hCvJSvfqGVNrZZqKdT2qrWw61le96nCnLpS/1XT+3x2ONZruX7i6OwF00f/x1u/LhjROYj68qGWu8/f8n4H1y/wNxTIHOEScSJcxuylj+e7cNnmnTw/t0thSWFKIe6lc6IPggdadDBWt7K6mdXtrDCn7i11f6s7S+crAgCyhnCJOBEuY3br65NduHznszwbPF2BTuctqZtW3bhRobBLndKtiOqKC7ciqjVKLSyTXvbO3VJLy5fvelf76vwtXeACAIgd4RJxIlzG7KS2I1y4XLYhx1cW6uR6DcWibuZnf1M2SLY5zBsy5L17vW7wHRv8BwIADjaES8SJcBkjBUoFy9gHT1erpM6V1JA3amFMde7jKxd6F9fMHe4NpwMAKAiES8SJcBkjdYUrXN7yeiUOnq7he3TxhUKirn5NFSQ1LMlrl5sbX1HjLwIAChbhEnEiXMao9aCZLlx2zdbg6bqqVVdsa6BftTqmuguFrj7V3Tbe+at3PqRu1wYAKBqES8SJcBmj+p3HuHCp4YgOyDd7Ejsw2LtVWtvDo4OkxlfUrds0fImGC9I4d98W6XiaAACHcIk4ES5jogHTf3GfN3j6nq+/9WszpICoO8hoCJ9wkHzmBG9A7tHtvGGDNDg5AABJCJeIE+EyJhMXrHetlr9/LsPB03XLNN36rOvvSgfKDsd5rZKrv/AXBACgfIRLxIlwGZPnR89z4bLt4HJCobqvda9d3f/60Sr7AqXu/KHbI+pew3RxAwD2E+EScSJcxuT
mHp+6cDl4+gq/JmT9fLP323r3og4CZdsfefeh1nmTuvUigHKt27HOFmxaYNNWT7PRS0fbO/PesddnvW4vfPaCjVw00jbu2ugviWK3fc9227J7i23YucHW7FhjK7ettKVbltqizYusZGOJzV4/275Y94VNXzPdpqyeYhNXTrSPl35sIxaNsHfnv2tvzX3L3pj1hr3y+Svu89VxckdrN7GdtRnXxu4fc7/9bdTfrMX7Leym4TfZjcNutL+M+IvdOvJWu/2D2+3OD++0v43+m7X6qJU9MOYB+8fYf7jHPTb+MXti4hP25KQn3fqenfKsTV6VvZFFCJeIE+EyJsHg6Ss2hYKi7lOtq7yDQKmibnDd13j7On8hoLht2LXBPl/7uT09+WlrPba13TXqLnfAvuqdq+z8N8+33/b6rZ3w2gkZld+//Xt7dPyjNnTBUFu9fbX/DBAFLQUsBSsFKoWp9xa+54JUjy96uBD11KdPuSCkYKTw9Odhf3YB6pYRt9ht799mf/3wry5Yab5CVhCcHp/wuAtfevwzU56xzlM7u/W9NOMlF9C0foW1vrP7uud7u+RtF+L0/Ppi8OGSD23MsjE2bvk4F/Q0PXj+YPv37H+7xz837TkXzB765CH3/NqeG4beYFe8fYVd+NaFdnqf0yM/D/leus3o5r87FUe4RJwIlzFYtG67C5a1H3/fr/E9U8MLlB1/5Y1LuXaOPwP5buvurbZ2x1pbtnWZzds4z7VyTF091SasmGCjloyyYQuH2dvz3nYHy55f9LSXZ7zsDoAdPu3gDrQ66Lb6uJVrxVCLhg7STYY2sWvfvdYaDW5kVw660hoMbGCX9r/UHRzP63eendX3LDujzxlW5406dmqvUyMPRvtbzuxzpjUc1NBtw8NjH3bb2G9OPxcsZq2b5VoDK5tajfTaKUzoYKowou3R/u/PfipA6PXSa9j0vaalyjXvXhP5mAvevMAFoT6z+7gWq4OdWmfV+jZj7Qwbu3ysDZk/xHp/2dtenP6itZ/U3h4c86BrPVPwUtA+u+/Zka9LoZdTXj/F/R7pM6Pfq3P7net+z/T5qT+wvvv9a/ROI/dZ0u+lPkMKrArP+rzo91ctjQrLnad1ti7TulRKUcjPFsIl4kS4jMGAqctcuGzeK/SHYsMiL1iqKxxl6CA5f9N8m7JqimuxeH/x++5A2X9uf3ewfPXzV+3Fz160f035lztoth3X1nUxqbtJLSg3D7+5TMDYn6KDikLXZQMuc8FOoS5bge5gLGohbDyksWs1VDiOOhBmUtRipfdJLY86mEc9V7hoGbWMKXCqlUthXa1Z41eMd4FeLW3q3szErq93uccpQCvMRz2fwoYCv7rTFa5zTV9iFBYVvvU7oOCvoKj3QJ91vY4K4fp8Ru3P/pR6veu511utfQqfzUY0c62Aag1Uq2BlhCi1YiqgqVVT+6T3WcFNrZ4KcdpHBTr9Tivc6bOg906tpn//6O9ueT1eraBq/VSrpz4f+nKkYKZWWH1G1Cqr97+YES4RJ8JlDB4c+LkLly9/HBo8XYOfK1z2beJXFD618k1fO9217CkkqjVPwVAteDonSWHuYGlJqf1Gbfvdv3/nWr4uH3i5Xf3O1S586eCnA1/LUS3tvo/vcwc/HZh1DtXznz3vuvB6zeplb85503Xr6Ryuj5Z+5Lr6FKR1jtfMdTPdQVEtaQoWOjiu2LbCnRumg6TClM4Zy8bBctX2Va6VS8FFob3T1E6udUvvh1q26vSuE7n/2SxqKWr+fnN3rpm6RvVlQvsdBwUQhVYFqdN6nVZm29T1HvXlozKLWq71hSZ5WzIpao1TQNQ69AVL4UxfvHT+nr6Q6fdO3cyfrvrU5myY41qNd+zhnO5iQLhEnAiXMbjk2Y9duJyyODR4+ls3eeFSd8zJc+t3rncXSKilUCFJYUmhSeFJIUrBQKEq6kAZlKgDYXlFB0l1UQUtKHd8cIdrqXjwkwfdOXMKImoF6zq9q2uxULfmwJKB7lw6nY+lLsFJKycdcPlszWf25fovbeGmhS7Y6TXY9tU2/xUpLgqxCnsKJGo97j6ze2QrVCZFAVutS3qNFWzyjc7tfG3ma661TC15UZ/NOIu6bxUW9cVFX1rUiqigqM+8vpyoJVafU84fRTqES8SJcFnJUg6eHtzzO8/Os1Q3nIKZWnN0bpbORYo66B1I0bp0HpO6t9Ttpe4steTp5H09p7o58zFwAGrdc1cX79p3dbFa4hW6dc5t8tXFyV9W9qdoHWqtVus0kC2ES8SJcFnJxs5b51otr+wy1q9JUKBUsFTAzCG1SGmoC7WCqPvs4v4XR4ZClevevc61WKorWxeo6Nw3XbCiC1fUza0LWXRemA6wOtjqwKsLXhRWAQC5RbhEnPImXK5fv95Wrsy81WrhwoU2ffp0fypzcYfLzh+UuHD56LuhiwPUFa5w2f9mv6LyqWtXQVIhUF3LOmk/KkSq6HwtXTmsiwfy4aIGAEDFEC4Rp7wIl/fcc4995zvfcaVRo0Z+bbQ2bdpY7dq19y5fvXp1u+OOO/y56cUdLv/cfZILl0NmhAZP10U8CpdTe/oV2aPuOV0tqasndbWluqHLu8pZV0Pf+/G97upYDT5d7FdUAkAhIlwiTjkPly1atLAaNWrY5MmTraSkxM4++2xr1qyZP7cshctOnTrZzJkzbc2aNdalSxcXMlWfibjD5a8fHubC5crNO/2ahHY/88LlxiV+xf7T1cUaQFjnLeocxnTDumigaV0UoCuBdZ6jWjCL9QIVACg2hEvEKefhskqVKta5c2d/yqxXr14uLCpsZqpatWp2xRVX+FPlizNczl+zzQXLOv/8wK9JWPm5Fyz/daJfkRm1KOoqaLVGRg2ZEpSL3rrIXb2tIX7Ura2u8DgGwgYA5C/CJeKU03CplkcFyQkTJvg1HtX17t3bn0pPYVFd65mIM1y+OXmpC5e3vREaPH3cc164HHS7X5Hazj07bfjC4W4gYw1JEg6RuvJa40NqeCAtM2s950YC8s327bYn8bdl98KFtnPmTNs+6VP7ev16fy6K3Tdbt9qedevsq2XLbNe8ebbzi1m2Y+pU2zZ+vG0dNco2v/eebRr4tm3s29fW9+hh67q+ZGs6d7Y1HZ+x1U89ZaueeMJWPfqYrWzdxlY89JAtv/c+W37P321Zy5a29PY7bGnzFrbkL81s8Z9vtEU3/MkV/ay6Jbc2Tyxzuy278y5b9re7E4+911Y88KCt+MfDtrJtW1v12OO2ql079zx6Pn12s4VwiTjlNFxOmTLFBUldzBOmug4dOvhT5Wvfvr07BzN5HYGPPvrIfvWrX+0tauU87LDD/LmV677+M1y4fGXMAr8m4Y0/eOFyel+/ojQNP6KxBHWXkORAqQG7Nb6jzo38NvEPhefbnTvtm23b7OtNm9wBcM+qVfbV8uW2e/Fi2z1/vu2aO9c7GM6Y4Q6IOvhsnzjxwMqkSbarpMS+3pzZHW7i9O2uXbZ70SK3jZsHD7Z1r7xia/71rK185BFb3qqVLb3tNlvctKktbPQHm3fJpTb3d2
fZ7FNOtVnHHZ+ylJx/gTugb+j1RuI1/MJ/JgQUwL9assR2Jr6Ab5882baO/sg2DxlqG/u9aeu7v2Zrn+uSCD7tXRDS67jkllts0fU32KIm1x9Q0fundSz7653uPdV6Vz3+T1v99NPuuda93M3Wv/66e/5N77xjW0aMcNukELjlgw9s06BBtqF3b7fcmk6dXDBbcf8Dbn2Lb7rZFl3X2OZfXt9KzjnX5vy2duRnIt+Lgm22EC4Rp5yGy2nTpqUMlzqvMp1hw4a5ZQcnDj6pbNiwwcaOHbu3qEVUXfFxuLDjRy5cTluy0av4NhEIH/+pFy637hv0WMP1aFgfjSt58usnlwqUGh5IA5frLioo39ebN9tXK1a4ALYj8dlSMFEA2zF9ugsTu+bMcS0VCmpqtVBwU4D7euNG15rxTSLYBRTytL49q1e75d06E4Fu+6eTbdsnn9iW99+3ze8OsY39+9uGN3rb+ldftbUvvOACkFoeVrZpa8vvu99rzWiRCEI33pQ4oDaxBVddbfMvu9wFnbln/s7mnPZb+7LmbyIPLHGWL39zks278EJ30FdwUMuJAoXChYKGQke27Fm50r0nW0aMdEFvzTP/cq0/at2Zf9llNvvU0yK3MdtF+6xWJT2/Qove74OdPssK5Xp99TlVMNdrvPb5523VP//pXme1nrnglXit555+RuRrU8hl9km1bE6dulZy9tk276KLbcHvr7CFf/ij+yyodVGtj2qJVKukWihXP/mUC6+u9TLmot+9bCFcIk45DZcbE38IFQ6jusUHDBjgT0XTBTxarl+/fn5NZuLqFt+2a48LlqUGT1+W+EOhYNn5VDepO57oYpxwmFTRFdy6C0ehDwOk1imFO4W3nbO+dMFt66jRXmjr2y8R2Lr7rSXtbMU//pEIPX/zWksSAWjBFVfYvAsutLn1Ts+LcJbN4g5+p53mDoBzzzjTHQRLzjvfHQgVCHQwXNDwqsQB8Q8uJIRbg/arJB6rQKmQFbUdUUUtQPPrN4heXwal5OxzItcbVbTfC/94jTvY6yCv8L6+Z08X6DcnvlhuGzfefYlQ8Ffrrlp7y7Pz889d0FI3pD4/Uc85/9LL3JcCdYnqy4g+k/u+oMxydWpBLvMFJfHcyV9QsuGbLVu8sJjYBhfG//1vFxRXPvKoLburpQtE+kzosxK1P/tT9EVHrXxq7dNnQ1+I1Aqo1kC1Cq559lnXSqjWQrUa6gvW9sTf7shW8QzKtsSXfbVA6gvMpsTf+w19+ngtpC++6J5LraTqKtb7pS88rrVaLZLX3+C1eCb+HrjWzsRy+juhvxdq5dT69GVBwUx/V/TFSK2y+ntTzAiXiFPOL+jRleLdunXzp8y1Qh566KG2OPHHO5UDDZYSV7j8eO4aFy4bPh8aPH1MRy9cvvs3Nxm+wlv3ce48rbO7328+U5eta81bsMAdrHWQ0EGmVBdVxDdw7yB4uQsXOohFHdzyragFTS07CnYKHQuubOgddP/8Z9f6o4O7zpla+XBr1yq0+umOtrbL864Ld0OvXrbxzbdcy5FCwdaPP/ZaUj+bbrtmz3aBQS14rtV0R+7v7ayucXWRbxs7zh3odYDXgV2trgqxamWNeo0OpMypW88F5CXNmrnWIX0+FOa2fvSRC3AKa3FQ2FAAUSjROXAK2lHbW9Eyu9bJZb8snH9BxJeFP7rPlz5nJWedHbmudEWnBpSce55bhz6n2i99PnX+3rrE31l9JtW97E6JSHwOv1qxMi8+f6h8hEvEKefhsmXLli5gjhs3zg0vVK9ePWvevLk/11xX9pFHHmlLly5106+//vreYDlq1KhSJRNxhctnRs514fKx8ODpPa/wwuUXg2zuhrkuVF7S/xJ3C7l8oxYTHYDUkqAANb/B7yMPZhUtaglzrSVqDWucCG433ey1ljzwoDv/St1ROiiqVcOdd/XBB267dKGGLtjQhRscHOOjLxbuS0WpUw68Fr1MTznIZ9pWBWy1kurCDLW2KvTpdAaFQH1BUihUONQXJYVFhWWFR4XIqM94RYpalfX7oRZctdbp90JBUb+X+n1Q663OkdTvAVAewiXilPNwKffdd58dddRRLvQlD6I+NXEA0y/FqsTBSjRfY2FGlUzEFS6vf2WiC5fvfe7fdeibPWaPVvHC5Y6N1nV6VxcuHxv/mDc/h9R6odCmVjd1PanlI+pAp6KDnbqi1cqz4MorEwffJomD3q2u26pUF1Xi4Le3i+ojv4sq8doHXVRAMXAXaCUCa8oLtBKhXOE8fIHWjs8+c78nuuodyBbCJeKUF+EyTnGFy2Dw9PXbdnsVi8Z6wfKF092k7tWtcDlm2Rg3Xdl0IFPI0zlrm94eZKvbP+mu1pxTu05kiFTXtc5tUuuhzvPaMWWK6xIHABx8CJeIE+GyEsxZtcUFy3rtQoOnj27nhcthD9iGXRvsxNdOdLdl/Oqbr/wFKkZdw2oB0YUOOu9R57LpPMd0F1CoK0/n1K148EE3ppu6BOM65w0AEA/CJeJEuKwEfSYtceHy9t5T/ZqE7pd64XLOMOs/t79rtdRYlgdC57up63n531u588F0zldUcAwXXSCglkiFyHUvvewGC9a5cQCAwke4RJwIl5Xgnjenu3D56icLvYo9u8zaHm7W5lCz3dvsjg/ucOHy7ZLMftF1vpaGX9HVyeWNATjvwovcOG0apmT9az1s64cfuossAADFjXCJOBEuK8G5HUa7cPnZUn/w9PmjvFbLl89z3eDqDle3uLrHU9Gg37piVV3WySFS50nqqlGdC6lubFogAQDlIVwiToTLLAsGTz/2waH7Bk9/v60XLhP/f7T0I9dq2XhIY2+eT+dMbhn5vjtXssyYgsf/yo2Dp2F5NPSLu9MPAAAZIlwiToTLLPtw9moXLq9+YZxfk/DyeV64nD/a2o5r68LlyzNedkPy6CIa3QmjVJhMFN2lRQNYbxo4kKF7AAAVQrhEnAiXWdZh+GwXLv85xB88ffc271xLnXO5Z5ed2+9cFy5LNpbYwmuvKxUo511yqa164gnX1Q0AQLYQLhEnwmWWXffyBBcuh830B0+fO9xrtex+qbtXuIKlAqYGUVag1B0+dPGNpgEAqAyES8SJcJlF33z7rTvXUuFy7+Dpwx/0wuXo9tZlWhcXLp+Y+IQ7f1LhUvekBgCgMhEuESfCZRbNWrHZBcsz2n/o1yS8eIYXLhePt0aDG7lwOX7FeHdvYoVL3acZAIDKRLhEnAiXWdRrwiIXLv/axx88fcdG73zLR6vY6m0rXLDUMETbpk5xwVIBEwCAyka4RJwIl1nUsu80Fy5fG+sPnj7rHa/V8vWG1nd2Xxcu7x59t61s09aFy7XPP+8tBwBAJSJcIk6Eyyw668lRLlzOWLbJqxhyjxcuP/mXNX+/uQuX78552+ac5t1lZ89K/6IfAAAqEeEScSJcZsmG7btdsNQFPbqwx+lS24XLnUsmWq2etdxdedYMH+KC5cJrrvWWAQCgkhEuESfCZZaM+GKVC5eNXvTHqNT5lmq1fPyn9sHi912r5Q3v3WDL7rzLh
csNb/T2lgMAoJIRLhEnwmWWtHvvSxcu2w390qv4/C0vXPa51v4x9h8uXPb49EX7ssYJNuvXNezrzZu95QAAqGSES8SJcJklf+g63oVLtWA679zhwuW3E16wM/uc6cJlSc8XXavlkltu9ZYBACAGhEvEiXCZBZGDpz97kguX0+e87YLlRW9dZIuuv8GFy81DhnrLAAAQA8Il4kS4zAJdHa5geWYwePrW1V6XeLuf2bNTnnXhstPwNjbr+F/Z7Fon27e7dnnLAQAQA8Il4kS4zAKNa6lweee/p3kV097wwuWbTe3KQVe6cPnZ0w+7Vsvl997nLQMAQEwIl4gT4TIL7ug91YXLnuMWeRUDb3XhcvWEzi5Y1nmjjs279FIXLreN9a8mBwAgJoRLxIlwmQWnt/vAhcuZy/0rwJ+q7sLlG5O9LvF2b9ziguXceqebBWNgAgAQE8Il4kS4rKAyg6evn+91iScC5l9G/MWFywkP3OrC5ap27fxHAQAQH8Il4kS4rKDhM1e6cHnNS+O9ismvuXC5c0Azq9mjpv0mUebUrefC5c4vvvCWAQAgRoRLxIlwWUEfz11jVz0/1p4eMcerePPPLlwOH+0NnN722YYuWJacf4E3HwCAmBEuESfCZba1+5kLl/d/2NKFyzG3NHLhcu0LL/gLAAAQL8Il4kS4zKbVs1yw/LZTLXeFeK1uJ9iXtWq5cLln5Up/IQAA4kW4RJwIl9k0sasLl1MG/Mm1Wv6jzTkuWC66rrG/AAAA8SNcIk6Ey2z6dyJEJsLl08NudeHyoz9e6MLlhj59/AUAAIgf4RJxIlxmi4Yh8s+3vLz/pXb68yfYrF/92r6scYJ9vdkf/xIAgBwgXCJOhMtsWTHdBctlL9R2rZYP3emda7m0eQt/AQAAcoNwiTgRLrNlbCcXLnv0b+RdJX5RXRcuN7/3nr8AAAC5QbhEnAiX2dLrahcumw5oYOf9q4YLlrNrnWzf7tnjLwAAQG4QLhEnwmU26HzLx39qWx/5kbsrz2M3euFyxf0P+AsAAJA7hEvEiXCZDUsnuVbLId3qui7xSbV/48LltnH+LSEBAMghwiXiRLjMho87uHB5T9+L7arHvFbLufVO91o0AQDIMcIl4kS4zIYeDeybRLis8/qp9uw1v3bhcnX7J/2ZAADkFuEScSJcVtQ3e8werWIT2v/UfvPqCTb1JK/lcuesL/0FAADILcIl4kS4rKhFn7gu8Xbd69qfH/CC5fzLLvNnAgCQe4RLxIlwWVHzPjTrdr5d1KuOdW/wKxcu13Xt6s8EACD3CJeIE+EyC+ZtnGenvnSCff7r423W8b+yPStX+nMAAMg9wiXiRLjMgm4zutmdLb0LeRY1buzXAgCQHwiXiBPhMguaDG1i/c/3usQ39u3r1wIAkB8Il4gT4bKCNuzaYGd1PsG+SATLL2ucYF9v3uzPAQAgPxAuESfCZQX1n9vf/nGL1yW+9Lbb/FoAAPIH4RJxIlxW0NbdW23qeWe4cLll+HC/FgCA/EG4RJzyJlyuWbPGli1b5k9lZvr06bZjxw5/KjPZDpc7E+tTsJx98il+DQAA+YVwiTjlRbi877777Dvf+Y4rjRo18mtTmzlzpp122mlu+e9///vWpk0bf0562Q6X63v2dOFyxUMP+TUAAOQXwiXilPNwedddd1mNGjVs8uTJVlJSYmeffbY1a9bMnxutTp06LoQuX77cJkyYYD/84Q+ta4YDl2c7XMquOXNsV8k8fwoAgPxCuEScch4ujzjiCOvcubM/ZdarVy/XIqmwGUXhUPOnTZvm15jdcccddsopmXVLV0a4BAAgnxEuEaechkudZ6mgqNbHMNX169fPnypN9d/97nf9Kc+oUaPcY1ZmcGccwiUAoNgQLhGnnIbLKVOmuFC4fv16v8ajumeeecafKq1jx45Ws2ZNf8oThMupU6f6Nfto3jHHHLO3HHnkkfZ//+//LVVX0fLTn/7Ujj766Mh5FO/1UYmaR/GKXp+qVatGzqPwGcqk6PX55S9/GTmPwmdIjTKPPPKIf2QEKldOw6W6tlOFy06dOvlTpSl0pgqXn332mV+zz8aNG23ixIl7y/vvv2/PP/98qbqKFgXWHj16RM6jTLT777/fzjvvvMh5FK/ovOEBAwZEzqNMtNtuu82uuOKKyHkUr/yf//N/bOTIkZHzKBPtT3/6k91www2R84qh9O3b1xYuXOgfGYHKldNwqeCnUBjVLa4DbZSBAwem7BZfu3atXxMvtRakOkcU5i62ymQUgGL2ox/9yObPn+9PIVm7du3SXuhX7BQuk7+oYx+NSnLvvff6UwAqU84v6NGV4skX9Bx66KG2ePFiv6a0pUuXuiCZfEHP+eef70/Fj3BZPsJleoTL8hEu0yNclo9wCcQn5+GyZcuWLmDq/MtgKKLmzZv7c80++eQTdwGOQmWgbt26LqysWLFi71BEr7zyij83foTL8hEu0yNclo9wmR7hsnyESyA+OQ+X8tprr7mhhE444YQyJxyrhfLcc8+1VatW+TXm/oAqrOiArHlvvPGGPyc3+vTpQ7gsh14fFaSm14dwmRqfofT0+hAuU+MzBMQnL8IlAAAACgPhEgAAAFlDuAQAAEDWEC4BAACQNYTLCtLVh/Xr17fbb7/dxo8f79ce/HQB1fDhw+2pp56yNm3a+LVl3XPPPXbJJZfYTTfdZJMmTfJr99HtOq+//nq79NJLI9ezadMmN2KARgm49dZbbebMmf6cfXTBli7gatiwYbnbErf+/fu7/b7qqquse/futm7dOn/OPum2PZP979mzpzVt2tSuvfbayHXoMXfeeadbh/6PWkcu6PXRNqnoc6LPUzKNdduqVatyt137rH2vyP6nW0eu6bOj7YratnTbno39z2QduaBtjSphce1/unUA2IdwWQEKDQqWgwcPdn9sfvCDH9iQIUP8uQc37U/16tXdGKIaVzSK9l8l2H8NCfXee+/5c83V67G625IGv9eQU/rDHFCw0CgBWocGwtc6tIyGmAqoTuvQcEYaA1XzVZdr2gYdiLp16+YC9OWXX261atWy7du3+0uk3/YD2f8jjjii1Dp0dbAek7yO5cuX+0vkjl6fu+++2733urOWRnZ44okn/Lnetp900knlbns29j/dOvJBkyZN3OulEhbH/i9btsw9RnXhdag+1/R6aHuSSyDY9nT7r33Wvkftv5ZNt//hdeh11OsZXgeA0giXBygITuHWugYNGtjNN9/sTxUG/bHVfibT/v/4xz/2pzzaf4WJgIJ3+/bt/SlvzFKta+XKlW5aoSNqHeE/2gpfyevQH/lwAMsF3UY0bM2aNW7f1MoYSLftmey/DnLhdXTp0sWtIzh46mCXvA59KcjHA5/GGdS2B6K2PTjIB5L3X19U9nf/tY7w7WST15FrGoqtTp06keEyk/3X/oaF93/Hjh32ve99r8z+q07zRC3HUetQfa4F4TKVVNu+P/uvZcvbf73Wes2T16H3BkA0wuUBUldm8l2B9Efq5JNP9qcKQ6pwqRAZ1coS7L9aVPQ4PT5MdYMG
DXI/B93BYVpH7dq13c+bN29OuY63337bn8of1apV2zuYfybbfiD7rxD7X//1X67LWVKt47TTTvOn8kfr1q3dF45Aum0v7zXMdP+DdSQLryOX1Dqmm0TojmTaj/C+ZLr/2t+w8P7rsan2P1ivnjNqHcmvay4E26ZtjWpJTbXt+7P/Wra8/ddrnWodeo8AlFX2NwYZ0Xl2N954oz/lUStClSpV/KnCkOqP8zXXXFOmlTa8/zpnSY9LPiCoRUCtb6KurKh1/OQnP3E/l7eOF154wZ/KD2rJ0LYGg+lnsu0Huv/HHnvs3lumRq1DB0YFlnygbVHRebn64jFy5Eh/TvS269zVYNtT7b/qMt3/YB3JwuvIpeuuu86djyoKM+FAl+n+a3/DwvufSbhKFdDC25Ir2oaf/exnduKJJ7pt1vTQoUP9uam3fX/2X8uWt/96rVOtQ+8RgLLK/sYgIxdffLHr5gvTH73//M//9KcKQ6o/ztr/5G6z8P7r4iY9Luh6CqhVLjjv7qKLLopch1rmZNy4cWnXkQ+C1yh8gMpk2w90/9WFWt46tB3BOnJN26Kiz4tCwquvvurPid52nbObbv9Vl+n+B+tIFl5HrqiV+/jjj/enyobLTPc//LmT8P5nEq5SBbTwtuTK2LFj/Z+8fVHLt043CaTa9v3Zfy1b3v7rtU61Dr1HAMoq+xuDjNxyyy2u9S5MrVfqGi0kqf44a/91BXRYeP/VzafHTZ8+3U0HDj/8cHv99dfdz7rSOmodwQFX95NPtQ618OWDrVu3ljlPUDLZ9gPdf7VslrcObUs4tOQL3R9cF33t3LnTTUdtu1qJ0u2/6jLd/2AdycLryJWf//zn7vcrKAozKkHoyXT/o4JRsP+pfn9VFzxPqoAWhKt8ou0K70+qbd+f/dey5e2/XutU69B7BKCssr8xyIj++FStWtW2bdvm15i1aNEiL/8gV0SqP87a/+OOO86f8ug0gWD/d+/e7U6CVzdnYOHChaX+qKdaR/i8PC0ftY7kC2pyIei2HDZsmF9TWrptz3T/dTV6IHg/wutIvrBAXaXhdeSLYP+nTp3qpjPZ9uT912kH+7v/Wj587//kdeSKfldSlUAm+6/9DQvvf/B5idr/4PdQzxe1jvB25IsOHTq4vyuBVNu+P/uvZcvbf73WqdYBIBq/HQcouIJQ40CKgoYOcvl2LmBFBX+ck+mK5yOPPHLvgS9q/3Xg+8Mf/uBPedM6XzAwbdo0t+5gHV988YVbh65ED2h8x/Affq0jHBxyRdusbdfBLpV0234g+68xQ8tbR/B+hdeRK+Ft0AVe2hd9IQuGa8pk27Ox/+nWkS8UZpIDXTb2X+eHJ69DdQGdB6vH6LESrCN8fmwuaDuCbRJ9OfnjH//ozt8NBNuebv+1z4Hk/dey6fZfr3l4HXo99d4AiEa4rAD9QdMfoZo1a9ohhxzirtwsFApC2rfkEv5jH+y/zoE67LDDIvdff4QVwjWeoYJl8gHr+eefd+vQQVVdpnreMHWvX3bZZS6U6PEKX/lwEr22N/y6BCW8/Zlse6b7f9RRR7lTDqLWEbxXWofGWk1eR65om/QFJHit1EobjBQQCO9/1LYHX1oqsv+ZrCMfaPtVwuLafz1Gj021jlwIQp5+f/Q3Rj+fddZZZQbjz3T/te8Huv/hdeh1jFoHgH0IlxWkFpkJEya4b9WFRH/YU5WwTPa/pKTEjQcaPoUgTMPraL3JV8WG6Q+5zj1LvrghV5Jfk3BJlm7bs7H/emy6deRC+HVZu3atX1tauv3XPmvfK7L/mawj14LXKVlc+59uHXHTublffvnl3telvO2KY/8zWQcAD+ESAAAAWUO4BAAAQNYQLgEAAJA1hEsAAABkDeESAAAAWUO4BAAAQNYQLgEAAJA1hEsgDwVj+82fP9+v8QT1lUnrTx7MO5d0v/UmTZq4barsfQcAVBzhEshDukPImWeeaVdffbVf41G40t1IKlMcz5EpDaStbdFtNrVd+xsuFUiT77YCAKhchEsgDykQKRgpWA0bNsyvLb5wqW2pUqWKtWjRYr+DpRAuASB+hEsgDwXh8s4777Tf/va3fm3Z4BcVnsJ1wfL9+/ffe3/m008/3c1r27atu+e57v3+xBNPuDoJHqN7x//mN79xPzds2LDMbfFefvlld199za9bt6517drVn7Nv+/X/Mccc435OpVWrVu5+zf/7v/9b6nn0WK07KKnWoe294oor3P3ttZzu+yzJj1cJ6DHaZtVpH9q3b+/P2bf/Tz31lLuX9CGHHFJm/1u3bm2/+93v9q63vP0DgGJDuATyUBDOli9fbt/97nftueeec/VB8AkEAS4sXBcsr2n9rHuUX3jhhW5aXc26J/yIESPcc0yePLnUY0499VQbOXKkm9Y6L7jgAjdfGjVqZFdeeaUNHTrUTStY6jHBdLD91157rS1YsCDlvecVLLWcnmP8+PH2hz/8odTzBM9dHs3v2bOnbd682U3rMQHNS359unXr5rZV4Vn0/9FHH713Otj/YLuCbdA+BzR/9OjRtmXLFjfdvXt39z8AgHAJ5KUgnAU/V61a1Xbt2rU3+ASiwlO4Llh+woQJblruuOMOO/zww23Hjh1+jbnWu6DlMXjMW2+95aZFoVF1gwcPtvXr17ufu3Tp4s/1hJ9X/2uZdevWuekoGzZscMv07t3brzHXOhg8jwTBrjyar3C5atUqv2afqNenXr16ZequueYa10oswf6/+eabblq0jarTNivw62ctF4RLAMA+hEsgDyn8hEOVuq4ffvjhvcEnEBWewnXJy0vyuiXqMQpSYdWrV3etnRMnTnTzo0qwjqjnSKaWUj1GYTVMQVfPI9qWdOvRc9WpU8etS8E5aD2V8H4FwtsbLsHzRO1/EKiD1t3bb7/dvR56X/72t7/ZmDFjXD0AgHAJ5KXkcKZu8e9973sZhcuTTz55b11FwuWUKVPctAThqkePHm54JP2sbuFUop4jWbCecKuq/Pd//7d7HskkXAaGDBlif/rTn9w6x40b5+qiXh+dZ6rzTVOJ2n9to+qSh4bq06eP3Xzzze41V8syAIBwCeSlqHCmC1VuvfVWF3ICOtdPF+oEPv30U7dcclAMi1p3OIQFj3n66afdtGisSYXbqVOnumk9x1133eV+DgvCV9RzRNHFRUErpSgUhp8nXbicN2+e/9M+Wj7oatdV5s2aNXM/B5o2bRq5zuA0gaj91zYGFwolB0zR8nPmzPGnAKC4ES6BPBQVzgYMGOBCTDgszpo1y44//nh3kY6ClLqU9bhshEtdwKJ6FU0H80Uh8Pvf/767Ylrdwpqn5fVYiXqOKIMGDXLnf2rZiy66qMzzpAuXmt+gQQP3mGAbzjjjDH+ud7GO1qlAGV6vroKvVauW/eUvf3H1eu2C+cH+n3XWWZH7r/l6Hk3//e9/t4svvtjOP/98Nw8AQLgE8pKCSxBmwqLqdVGJWup0gY2uBg8voyCUvHzUOlI9plevXu7n8HmMAbX0qVtYwxglLxNeXzrTp093267W0eHDh/u1nqjtT6arzIP
n05igwVXjAdU9+eSTZdajsK4hiFSvYZUCQbhUS6Tq9djwWKOifdXj1KKp5bmwBwD2IVwCQEgQLgEAB4a/oAAQQrgEgIrhLygAhChcqgAADgzhEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFlDuAQAAEDWEC4BAACQNYRLAAAAZA3hEgAAAFli9v8BHM980numNw4AAAAASUVORK5CYII=">
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAArMAAAGJCAYAAACZ7rtNAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAFhNSURBVHhe7d0JuNXUvf5x732ePm2tA7Xyby/iLS2I9aJUxRG1Uue5Tljn4lCqiHVGbK2ibVUUoYJoqVBAEAoKiMiMAjIjg+gBmZEZmQdBHH//864kh7BP9jkBzj575+T78VkPOyvZ2Ul2jnn3ykqynwEAAAAJRZgFAABAYhFmAQAAkFiEWQAAACQWYRYAAACJRZgFAABAYhFmAQAAkFiEWQAAACQWYRYAAACJRZgFAABAYhFmAQAAkFiEWaTS22+/bcOGDfOHKkarVq1cKWR9+/ZNxHLm07Zt26xr1645306F8D0UwjJUhNGjR7v10L8A0ocwi7zbb7/9spZcHZzOPvtsu/TSS/2h+Mo6aGp5CzkYaPkOO+wwa9SokSvZaFz4O6hVq5ZddtllNn78eH+K0u6991479thjrVq1aq7oteoyabtpntm+V42L2oa9e/e2M844w2rXrm3f/e537Sc/+YmdfPLJdvfdd9uSJUv8qTzhZc8s5e1PCxYscNPVqVPHbYdcfp+5nn8gyftsXOXtVwCqNsIs8i44oEaVXB2c9iXMZjtoBstciAYOHOiWe9KkSX5NdkHYDdancePGVrNmTff+yZMn+1N5Vq5caQ0aNCj5Dvv16+eKXqtO4zRNoLzQEcwnLJhXvXr1rEOHDjZ06FB74403rF27dtawYUO3rGHBPKJKeftTixYtXHivDJUZZrNt86jtnURatzjfL4CqiTCLvMvFAXXjxo02bdo0mzdvnn3++ed+7S65CLPZbN682T788ENX9Lo8mveKFSv8oXjK+wxtXy13HEGYDevUqZN7/x133OHXeIL59urVy6/ZRXWZ32152y9z+oULF7q6zOUJfPnll3bffff5Q57MeeyJqHUPaD/S/qT9SvtXeWbOnGlFRUX+UGn6nMxts3btWn+oNI3bk30j2MZlbfPwttq0aZNNnTrVPvvsMzccZc2aNW6abH9X+6q8v9tMUesEIH0Is8i7OOHjlFNOsfPPP98f2kVhQe9//fXX/ZpdASsoOjWt1rywzDAbvCdTuD4IBZklOKDqdeZ6qPUwc3rVhQUBSuFPrYLBdOVtk0B5n6F5Z44va97B8mTS+8L16ltao0aNMn8UaJym0bRSVrCSzGW74oor7LjjjrPt27f7NeUrb/2y0fsyS7Cc2n+0H4XHZX6GhlWvfVHdFKKmCdO21PjHH3/cDjnkkJL5vvTSS/4UHk2jVulgvIq6V2RSvaZVCeYXtW+oBPRa06v1PRinZR8yZIg/xS7XXnttyTQqWqZwWI/az4KS7fsO03KE3xP1dxvsm+F11LzL268AVG2EWeSdDkI6OJXl6aefdtNt2bLFr/Hcdttt9tOf/tQfMhs5cmTJ/DZs2OAOtjpQK1CFT3fvbZgNHzTDRYLPDQwaNMjV6fPVF1MlCA0aFwgO0AoLOnirD2iTJk3cdCNGjPCnihbnM7R8wXpkLnOUYHnCdOGY3t+2bVu/xmzChAmu7vnnn/drStM4TaNpRZ8bLEcUjQtvw1/84hduffZE5jzi0jIF6x5sIxXtN9p/tBzan7RfBdtT+1sgqKtfv35JIM22nqLP0T5Y1ncnmm/nzp1dK7U+u2fPnm6azO9IdZrf1VdfbRMnTnTLqjCrZdC48DoFVK/P1GfMnj3bhdigv3BYsG76V9tD+6n213PPPdefYtffR7hcddVVbn4fffSRP1W0uH+3wfdzwQUXuK4z+lsJPitYRwDpQ5hF3ukglK0Eli9f7oZ1sAv7zne+Yw8//LB7vXPnTqtbt64LE2E63Zv53r0Js1LWQTPzMzR/1c2ZM8evMfdadeHP1sFZdeoHGtApZdXdc889fk20uJ+Rbf2iBIFB71E577zz7Mc//rGdc8457vRv4OWXX3bzVB/ZbDRO02haKS90aFx4Gx544IFZLyQLlzDNI1spT7DuYcG2034UplZ07W/a7ySYTmEzjuB7L++7ixJ8lk77BzSsv4fMrgraPhqXuZ1E9SeeeKI/5FG3DdUHITIImnfeeacbDgQ/pLKt75/+9Cf73ve+Z+PGjfNrou3J322wzRYvXuzXeMpaRwBVX7yjG5BDwQFLB6LMEqZQdcQRR/hD5lqr9N4gZAT9K++66y43HKYr7MMhpTLC7M9//nN3AVQm1WlcQMsVHg6oPrzMUeJ+Rrb1ixJ8blD0vurVq5da52D7q4Usm+DCM00rZW0/0bjwNjzooINK9YkVTRcuYcE89BmZpTzBOodpWPtPposuush9lvY7Cbbx3Llz3XB5NN9s351aM8PUMt68eXPXIhksoz5L/VcDGj7rrLP8oV203hoXtf6qb9asmT/kyZw++J61fpnl9NNPd2cRMnXr1s29J9zCrFCcWaS8v9ubb77ZH/K22eGHH+4P7VLWOgKo+gizyLvgQFme7t27u2mDU+86xRkOA2Ud0IKDf6AywmzmcCBznkE4yZStPizuZ2QOlyXzc3UqV6FF7w+3BCpIqe6FF17wa0rTOE0ThK5g+0X1yVS/Wo0Lr4/6Zep0czZR65U5jz0Rtc01v6jvIfjsYF/Yk20smmfUcmbO59Zbb3XDaqHW/W/1eUE3lPB+qOGo+QXbPDxtIOo9mdNrvH4YBdsmqoSNGjXKvb9Hjx5+jadPnz6uPig/+9nPXH1Zy5c5/6jPk7LmAaDqi/9/XiBHdBCKOghH0WnU3//+9zZ//nz3vn/961/+GLOlS5e6ugceeMCv2UWnq8OhKDPMPvvss+69madoFZhVHyjroKn68HooiEWdLlYA17hAtgN0tvowzSdb6174M7Rc4fUoS9Tn6h6zer/6QAZ01btu2VXWKXGN0zTBFfKLFi1y83nuuefccFjQB1c/WgJ6v05Vhy80CotaLw3H3Z8yRa279hvtP5m0bPos7XeyJ9tY9DnlfXfLli1z88zsbqILwFQf3g+zrfee7LOSOf1rr73mhsPdYLLR96RWfPVxz/TVV1+VKlLe3224e0PU9yNlrSOAqo8wi7yLOqBmo1OOOvX497//3b1v69at/hiPgoD634XpYhhNq9tLBTLDbNC3MzgdLjNmzLD999/f1QeCMBZ1KyrVh9ejadOm7gKWsClTprjpNC6Q7QCdrT5M89H8NN9A1GfsSdDK9rna9ppH+MlpuiBMdXqoQSbVaVz4ojH1j1TYiZr/Qw895KZXy15A97VVXTiYh0Wtl4bj7k+ZotY9uC2Z9qMwfbfhMLon21j0OZo+qs9s8N0F+1vLli3dcOCkk05y9eHwpuGo9Q6CXpx9VjKDoW6VpWF1q4gS9KPWhVunnXZaqdu3xRH37zbq+5HMZQaQLoRZ5F1wQM1WwqZPn+6m1xOm1Ic206xZs9x4taapn6Fush/VepkZZoMr1oNl6dixo5166qnuterC1J9RfRfV11DjgwNo8N5
AEAJ0gNcFUCpaFtVpXCDbATpbfVjwGZpvWZ8RtR7ZZPvcDz74wM3jN7/5jV/jCVoJ9RkKTCrB50XdQioIwApsCioDBgwomT7q9mvBOC2Trs4fPHiwe19woZJKmIb1nmylLNnWPWgt1f6k/Ur7lz5H+1tA885clrLoc/QePfhBF1Gp6LXmEf7u1BquaV955RW33hdeeKFdd911brpweAvWO5O6iWhceJ8NRL0nKhiqJV11wXem70BnM7RcwXRaTn2G5pdZwvOKEvfvNtv3E7XMANKDMIu8Cw5QUUUHwkyqP/PMM+3NN9/0a3anU50KBQcccIC7WEQH2XBfT8kMs6IDYfA+fYYOqvp8vQ7TRS1qRVRLlcYFB9Co5dWV3Aqzhx56qCt6nXl1t96X+RmSrT5TnM+IWo9syvpctc5qXLg1UV599VUXRNSlQEWvVZeNlk/9cPUDQq3fupdsly5d/LGl6UEBOg2tz9apZ91dQVfhq4U+8yb/wfJHlaj9KSyYLpP2H+1H2p+0f2g/yez6sCfbWILl0X6mgKowph9Qmd+dTu+rj+yPfvQj9whfvUf7nN4fDm/B/KLoB0Z4nw1EvSdq3hLez3SPVy2LuhOsXr3ajdd7spXMeUWJ83cbzC9TtmUGkA6EWQAAACQWYRYAAACJRZgFAABAYiU2zLZo0cIuv/xyd2W7+prFoT5Z6qenPne6cjZb/zIAAAAkQ2LDrDr7BxdCxA2zumBEF6boynXd8kehNnzbFwAAACRL4rsZxA2zwa1fws9X122DMm/9AgAAgORITZjV7W++//3v+0Oe4L1qqQUAAEDypCbM6kbt9evX94c8wXvDNygP6NGauudjUI4++mgXhsN1FAqFQqFU5XLQQQfZiy++6B8ZgcJEmC1+rx5bmunTTz91T7kJygsvvOD+qMN1FAqFQqFU5aKGnG7duvlHRqAwpSbMltXNIHiCTVk+/vhj99QhAADS4le/+lXWpy0ChSI1YVZhVNOFH0GpuyHEvQCMMAsASBvCLJIgsWFWITYoCqnB68CUKVOsXr16u13cdcopp7hbcy1ZssT1idUzwOPemoswCwBIG8IskiCxYVatqrrXbGYJTJ061Y455hhbtWqVX+M9NKFhw4YuxNauXXuPHppAmAUApA1hFkmQ+G4GlYUwCwBIm3yH2e3bt1MqoXzxxRf+Fk8mwmxMhFkAQNrkI8x+8803rovgvHnzbM6cOZRKKuqCuXHjRv9bSBbCbEyEWQBA2uQjzCpQ6Zi7YcMG+/LLL+2rr76i5Lhs27bNdctcuHCh/y0kC2E2JsIsACBt8hFmly5dypM582DHjh2uhVb/Jg1hNibCLAAgbfIRZufPn+9aZVH55s6da5s3b/aHkoMwGxNhFgCQNoTZdCHMVnGEWQBA2hBm04UwW8URZgEAaUOYTbZevXrZQw89ZBdddNFu9+LPhjBbxRFmAQBpQ5hNNgVYPSAqeFpqeRRmt2zZ4g8lB2E2JsIsACBtCLPlU1BUYNRTRrt06bLb00UHDRpkbdq0sd69e5dap8ynkAbzCZs1a5Z17NjRFc1f4zOnGT58uLVv395NM23aNL92d4RZOIRZAEDaEGbLp6CoFtB69eqVtITKtdde6wJkkyZNrFatWlajRg0bPHiwGyeZ4TIzcGraQw45xL1X89Aj+sPzF73WNDfccIPdcsstVqdOHRdsMxFm4RBmAQBpUyhh9oqOEyq1XP3yRP+Ty6egqCD74osv+jXmWmgVHidPnuzXmF111VUulAbKC7OaVu8JTJo0yY4++uiSMDtjxgzbf//93fsC+lyF5sx7xe5JmKXPbBVGmAUApE2hhNkT/jbSfvrw25VW9jTMHnjggf6Q56abbrKTTz7ZH/K0bt3aateu7Q+VH2br1q3r3hNWvXr1kjDbuXNn11L7+OOP71YOPfRQ1yUhjDALhzALAEibQgmzH67YbNOXbqy0Mm/NVv+Ty6egqFAZdumll1qzZs38Ic/AgQN3C73lhVlN269fP3/Ic/zxx5eEWfXFPfvss+2JJ55wRfVB0bzCCLNwCLMAgLShz2z5osLsww8/7FpRP//8c7/GrGXLlnb66af7Q+b6woZD59NPP71b4NS0zZs394fMVq1a5cYHYVYXl2l4zJgxbrgshFk4hFkAQNoQZssXFWYVCnW6X/1elyxZYl27dnXh9dlnn/Wn8LoiKJiuXLnSzeOSSy7ZLXB26tTJvUfvnTdvnvsMlSDMisZrONw3t2/fvv4rb9mConmHh6MQZqs4wiwAIG0Is+VTMMwMs6L6hg0b2gEHHOAuEGvXrp0/xqPbaDVt2tSFzFNPPdVNn9l6qqBas2ZN9/6zzjrL3dFAITesRYsWri+u3qvSuHFjf4z3/qA+XAizKUWYBQCkDWE2fxYtWuS/8ixYsMAF0ZEjR/o1FY8wW8URZgEAaUOYzR+1nqqVtU+fPq4/rfrgnnjiif7Y3CDMVnGEWQBA2hBm80dhVg9AUL/bK6+8cre+srlCmK3iCLMAgLQhzKYLYbaKI8wCANKGMJsuhNkqjjALAEgbwmy6EGarOMIsACBtCLPpQpit4gizAIC0yUeY1S2oCLP5QZit4gizAIC0IcymC2G2iiPMAgDShjCbfMOGDXOP0R0+fLhfkx1htoojzAIA0oYwm2x6YtgPfvAD97AFvf75z39uI0aM8MeWRpit4gizAIC04QKwZOvZs2fJtly5cqXVq1fPGjZs6IajEGarOMIsACBtCLPl05O6GjVq5J7QVbt2bdcCKmoBPf300+3AAw+0Y489ttQTvPSesGA+YS1atLCaNWva4Ycf7t4ffE5gxYoV7ulgyiea7uabb/bHRNNnVKtWzR8qjTBbxRFmAQBpUzBhdvWHZiunV15ZN8//4PIFIVTB9d1333V1CoWHHnqoexTtkiVLrGvXrlarVi3XdzUQhN6A5hOu07R6j967ePFiu/7660uF2XPPPdcef/xxKyoqstmzZ9sTTzzhwm02eu+ll17qD5VGmK3iCLMAgLQpmDD73BFmjx9UeaXL+f4Hly8Iofo38OCDD9ohhxxiO3bs8GvMmjdvvtsp/vLCrMKx3hNYtWqVGx+EWX0vGlZ9YOjQoa5OXQoy6X01atSwyZMn+zWlEWarOMIsACBtCibMdmpUuWUPw2xm94ALL7zQbr/9dn/I069fPzvggAP8ofLDrLon6D1hxx9/fEmYfeGFF+y4446zE044wU4++WQXlM844wyrW7fubsFadCcDzXvatGl+TTTCbBVHmAUApA19ZssXFWbVveC8887zhzydOnVy3QYCmWF24MCBu9UplLZu3dof8lSvXr0kzGbOL5u+ffu6+erf8hBmqzjCLAAgbQiz5YsKswqcCpCLFi3ya8xuu+02u+iii/wh7wIwBdKAxofDrALxVVdd5Q+ZTZw40Y466qiSMDty5Eg3vfrUZhMsR5wgK4TZKo4wCwBIG8Js+aLC7Pbt210XgOACLgXTzOCpfrV16tSxu+++212U9eSTT+4WZgcPHuz63Woeer+CrLoRBGFWzj//fHe7Ld
UNGDDA/asuBxJ0W9CyZZZsCLNVHGEWAJA2hNnyKTSGA2ZA66DAqv6zanXVdJmCIKwW2qj5zJo1yzp27OjKRx995C7geu211/yxHt1L9qabbnKtvnq/tp8E84sq2RBmqzjCLAAgbQiz+aMwGt4OGlZLa3kXce0LhdktW7b4Q8lBmI2JMAsASBvCbP4E4VUXgulhDEGXhVwizFZxhFkAQNoQZvNL96lVqFV3g/A9a3OFMFvFEWYBAGlDmE0XwmwVR5gFAKQNYTZdCLNVHGEWAJA2hNl0IcxWcYRZAEDaEGbThTBbxRFmAQBpQ5hNF8JsFUeYBQCkDWE2XQizVRxhFgCQNoTZZNPTxcp64lcmwmwVR5gFAKQNYTbZ9ibM8jjbSqZnHuuJGHpWcePGjf3a7PSFnn/++XbQQQdZvXr13LOS4yLMAgDShjBbPj3UQEWGDBmyW3gcNGiQtWnTxnr37l1qnYL3BMLzCehhCR07drS+ffu64ahphg8fbu3bt3fTafowwmyBa9asmQukekbxggUL3BfWtGlTf2y06tWruy915cqVbgfTY+LifsmEWQBA2hBmy6dwqQxy+umnuwY2vZZrr73W5YwmTZqUNLwNHjzYjRONC9N8wnWa9pBDDnHvVYPdhRdeWCqc6rXec/LJJ7txmn7gwIH+WMJswVMw7dChgz9k1rNnT/eFKtxGydxJJNgJ4iDMAgDSplDC7CPjHqn0Epfyhc74tm3b1q8x69Kli8sXkydP9mvMrrjiChdsA5n5IzOnaNqrrrrKHzKbOHGi/fCHPywJpzNmzLD999/funbt6oZFLbSnnXaaP0SYLWhr164ttZOI6nr16uUPlaZfLsHONm/ePNeye/fdd7vh8hBmAQBpUyhhtlGfRnZ0t6Mrrdw89Gb/k8unEPqd73zHlixZ4teY3XrrrXb00Uf7Qx6FyiOPPNIfKj/M1q1b11q3bu0PeX70ox+VhNNu3bq5XKL3hYvmoVAqhNkCNn36dPdlZe7sqlPflGwWL15sN9xwg9vBfvCDH5TaScLGjx9vJ554Ykk55phjrFq1av5YAACqvkIJsyM+GWFvLXyr0sqElRP8Ty6fAqRCY1hUiAyCZiD8WjLHH3jggdavXz9/yHP88ceXzFf/nnXWWa7o88JF85Ko5SgLYbYSzZw5M2uYVRN7lE2bNrn+LOprO2DAAHv++efL/JLXrVvnOlUHpXPnzq5rAwAAaUGf2fJFhdl77rnHnf0Ne/rpp61Bgwb+kLkGsqKiIn/IXGNcOMwqszRv3twfMlu1apUbH+QWXRT2/e9/3zXwZUOYLWAKpvpCo7oZ9O/f3x/anQKswqi6KARatmy5245TFroZAADSJh9hVt0Akx5mR4wY4fJF0J9VXRAuuOACe+ihh9ywXH755RZ0fdSZY80jnEmeffZZd/GX5rFo0SK75ppr7IwzzigJp7ofrM4yazjo4qDtpmuIAoTZAqdfPGotDejuBAcffLAtXbrUr9mdOmPrSsIwfcHacYK+JWUhzAIA0oYwW76oMCs6U6yMoUCqf3VBV7hfrTKIcskRRxzhxqulNRxmpUWLFlazZk07/PDD7dFHH7UTTjhht+6UupuT7nKg96nVV/8ee+yx/lgvzKouswTdEDIRZivZfffd5wKtru5TM33Dhg3tzjvv9Meaa7VVR2vdhisY1hf40ksvudbZMWPG2CWXXBK5A0YhzAIA0oYwu2+0Hsof4RCbScEyfNY4G7XeKseo8S6TQm22uzntCcJsHqibgH6xKGRmPjTh/ffft+OOO871MQnoyr+LL77Y3UKjdu3a7r60cVplhTALAEgbwmz+KOSqi4Ea33r06GG33367u4BdXS1zhTBbxRFmAQBpwwVg+aMwe+WVV7ozyPpX3RIyrxWqaITZKo4wCwBIG8JsuhBmqzjCLAAgbQiz6UKYreIIswCAtMlXmF2/fr0/hMryzTffuKxDmK3CCLMAgLTJR5hdvXp1mVf/Izd039o5c+bYV1995dckB2E2JsIsACBt8hFmP/vsMxeqFi5caJ9++imlEsqyZcvcNte/SUSYjYkwCwBIm3yEWdmxY4cLWWqhVdE9Viu6BPPe0xI1r8oquVqW5cuXu5ZZdTVIIsJsTIRZAEDa5CvMAnuCMBsTYRYAkDaEWSQBYTYmwiwAIG0Is0gCwmxMhFkAQNoQZpEEhNmYCLMAgLQhzCIJCLMxEWYBAGlDmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBITZmAizAIC0IcwiCQizMRFmAQBpQ5hFEhBmYyLMAgDShjCLJCDMxkSYBQCkDWEWSUCYjYkwCwBIG8IskoAwGxNhFgCQNoRZJAFhNibCLAAgbQizSALCbEyEWQBA2hBmkQSE2ZgIswCAtCHMIgkIszERZgEAaUOYRRIQZmMizAIA0oYwiyQgzMZEmAUApA1hFklAmI2JMAsASBvCLJKAMBsTYRYAkDaEWSQBYTYmwiwAIG0Is0gCwmxMhFkAQNoQZpEEhNmYCLMAgLQhzCIJCLMxEWYBAGlDmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBITZmAizAIC0IcwiCQizMRFmAQBpQ5hFEhBmYyLMAgDShjCLJCDMxkSYBQCkDWEWSUCYjYkwCwBIG8IskiDRYfbBBx+0WrVqWY0aNaxx48Z+bdlatmxpRx55pO23336utGrVyh9TNsIsACBtCLNIgsSG2WbNmlm9evVs2rRptmDBAmvUqJE1bdrUHxtN4w877DDr16+fGx49ejRhFgCALAizSILEhtnq1atbhw4d/CGznj17upZWhdsoqtf4QYMG+TV7hjALAEgbwiySIJFhdu3atS6YTp482a/xqK5Xr17+0O769u3rxs+aNcu16qpbguriIswCANKGMIskSGSYnT59ugumGzZs8Gs8qmvTpo0/tLt27dq58epf26RJE7vhhhvskEMOydrNYNy4cXbccceVlKOOOsqqVavmjwUAoOojzCIJEhlmZ86cmTXMtm/f3h/aneo1/uWXX/ZrzO69915Xt3HjRr9ml/Xr19s777xTUrp27eq6NgAAkBaEWSRBIsPspk2bXAiN6mbQv39/f2h3qtf4lStX+jVm8+bNc3XZ+tmG0c0AAJA2hFkkQWIvANOdDDp37uwPmbuw6+CDD7alS5f6NbtTvcaHLwALuh4sW7bMr8mOMAsASBvCLJIgsWH2vvvuc4F24sSJVlRUZA0bNrQ777zTH2s2adIkq127tq1YscKvMTdet/DSLbmGDBliderUsUsvvdQfWzbCLAAgbQizSILEhlnRAxBq1qzpQmbmQxPUdeDEE0+01atX+zUeTafp9b6495gVwiwAIG0Is0iCRIfZykSYBQCkDWEWSUCYjYkwCwBIG8IskoAwGxNhFgCQNoRZJ
AFhNibCLAAgbQizSALCbEyEWQBA2hBmkQR5DbN68pbuOqBbZRU6wiwAIG0Is0iCvIXZpk2bugcWKCAGYVb3ju3UqZN7XWgIswCAtCHMIgnyEmZfeeUVu+6666xPnz722GOP2ZgxY1z9hAkT3MMPChFhFgCQNoRZJEFewqweXNC1a1f3+s9//nNJmN22bZsdcMAB7nWhIcwCANKGMIskyEuYvfLKK13rrOgpXkGYHTp0qHsEbSEizAIA0oYwiyTIS5jVY2Qvuugimz59urVo0cKF2QEDBti1117rhgsRYRYAkDaEWSRB3i4Au/XWW90FYGeffbYde+yx7nWtWrX8sYWHMAsASBvCLJIgb2FWRowY4e5e0KZNGxs0aJBfW5gIswCAtCHMIgnyEmYbNWrkuhokCWEWAJA2hFkkQV7CbLNmzQizAAAUOMIskiAvYXb27NlWr14969y5sy1evNivLWyEWQBA2hBmkQR5CbNqldUFX1FFXRAKEWEWAJA2hFkkQV7CrB5fW1YpRIRZAEDaEGaRBHkJs0lEmAUApA1hFkmQtzBbVFTkuhvoaWB6vK2eBFaorbJCmAUApA1hFkmQlzA7fvx41z+2bt26dsMNN1jTpk2tQYMGri54zG2hIcwCANKGMIskyEuYveeeeyIv9FJLbf369f2hwkKYBQCkDWEWSZCXMKsgm61LgVpnCxFhFgCQNoRZJAEtszERZgEAaUOYRRLQZzYmwiwAIG0Is0iCvJ3TV6C97LLLXAttUHr37u2PLTyEWQBA2hBmkQSF2UG1ABFmAQBpQ5hFEuQlzA4YMMD1j82kuqj6QkCYBQCkDWEWSZCXMPvAAw/Yc8895w/t0rdvX9d3thARZgEAaUOYRRLkJcxyay4AAAofYRZJkJfkeMcdd1izZs38oV3atm3LrbkAACgQhFkkQV7C7OTJk6169erWvHlz10I7c+ZM+9Of/mQ1a9bk1lwAABQIwiySIG/n9NUKq0CrbgVBady4sT+28BBmAQBpQ5hFEuS1g+qOHTusqKjIPvzwQ9u8ebNfW5gIswCAtCHMIgkK4mqr5cuX29y5c/2hwkSYBQCkDWEWSVCpYVZ3MejYsaM/5Dn11FNLuhnUqVPHBg8e7I8pLIRZAEDaEGaRBJUaZg888EBbu3atP2Q2aNAg+9nPfmadO3d23Q3OOecca9KkiT+2sBBmAQBpQ5hFElRamJ03b55rfQ279NJL7fbbb/eHzLp06WJHHnmkP1RYCLMAgLQhzCIJKi3Mzpo1y4XZNWvWuOEFCxa44V69erlh0W26MgNvoSDMAkA67fzqG9vw2Re2dMN2m71yi01ZvMHenfupDZq10npNWWqvvLfI/jFqvv3t7TnWst+H1rzXDLul61T7badJduVLE+yyF8fbRS+8Z+e3G2tntRljZz472ho+846d/PdR1uCvI+2XTwy3eo8NsyMfHWo/ffjt3Yrq/q94XP1Ww+24J0fYiX8baac89Y6d0fpda/TcaDvn+TF2/j/es4vbj3Ofo8/rN32Fv+T7jjCLJKi05Kg7F9SoUcO6d+/uhtu3b2/HHXecex1QmOVxtgBybeP2L+zjVVts7Ly19p+py6z9Owvskf4f2m3d3rfLOoy3E4oDg4KEgsI9/5lpnccttvc/2ei/O702bf/Slm/cYR+v3mrTirfHe/PX2YjZa1yoe33acus5+RO3rV58d4E9P2KeC3ePvvmRPfT6LLu7OOD9/tVpdlOXKS7kXf/KZPf61m5TXf2dPae7ae7rM9NN/0hxKNR7n3hrtpvPM0M/tjbD59qzw+bak4Nm25/6f2T39/3A7nptut3e/X27sfNka/zPiXZJcajT96awp+B3THEIzAyIVb28NGah/43tO8IskqBSm0FbtWrlWl51IZj+7d27tz/Go/FRTwYrBIRZFILtX3xta7futMXrPrMPV2y2yYvW26R9KApz73z8qQ0vWm2DP1xlb32w0rXq9H1/mb02Zam9OvET6zJ+sf3rvUXuAKnQ127kfNcSpfCiIDN1yQabt2arfVq8XPmybedXtqI4ZBWt3LLbNpm4cL09URx8FHiufnminV4ccKIO/ntS1ML25wEfue3zUfF3kBTrtu20BZ9ucyF01JxP7Y3py61LcfBsN3KeC4cPvzGreDvNsN/9e4oLhRf84z0XCNUaGLUd0lbUeqrW1PPajnX7koK4AviDxcG71VtF1rY4vL9c/DfSY9In1n/GCve3of0v/PdWWWXV5h3+t77vCLNIgko/pz98+HAXWlesKH0aRPV9+/b1hwoLYRYBBUqdctQBQ6FSLXwzl23ap2CpgPiXNz+y+/t8YE1fnWY3vDLZnTI8+/kxdspTo+zox4dFHmALsei0qE6f6qB/TadJrtVNB3y1rik47U1RK9/jA4tcK6nC1hUdJ7hTrHsbtP7vL8Pc+6/71yQ3T7X6KbQr0Ie/l/EL1lmnsYusWXFo+dWz2YOwTu0qNOuHgIL9vvj8y69dC+jqLZ/bkvXF+9fqrfbB8k3u1HZ42cJFoempwXOsRXEg/UOPaa7lU6eedTr6qL+UPnW9t0X7ofZHtXz+pnj/1PbT6XSFOrWoqjVVLanPDPk48nusiPLCqPnuO+le/EOrT/GProHFP8D0Y2xM8Q8zbaMPiv8Wtc207bQN1Qq/o3ibYu8QZpEEhdlBtQARZpPns51f2Zrig9mitZ+5MKCDvlpL1Gqi1hO1NCokqVVFYeuO4hBwY+cprtVF/c8UJE975h3Xp61QwmTdPw9xAU4tZmo5UwuaAmM+i4LcvgTLiihBgFafRC2TArROVStA63tWyFTgW7h2m/sxsrfUAvze/HXW4Z0FrkvCScWfGbU8hViOfWKE66t5ecfxdnOXKe6Uvn5AqUVR4fC1yUvtzZkrXautQqFanfVjTS26hMH0IswiCQizMRFm80/h9JP122360o0ulOrCC/XNUxjVBRdqJVJr4PF/zW2oUqDUxRgKMmqt02de2mGcC5ZaBp1+VGtV0A9Qy6ZWKwXmoB+gltn1Axzi9QPU6Xu1DOr0vloHdfpfp4PnrtnqTp+rpS4pwqf81bKp9dEp+ZdGL4xsaYtX5ru+mIXUtSGwftsXbpn0PerHUEX00dSPJ/XbVbcItYKqH6j2L7XYK0Sr24T6i6rfqLoItB421/0tqM/qoFmrXOBWN5RlG7bblh3J2XdQeAizSALCbEyE2Yqlfp/qv6eLakbOWRPqvzd/txCjVtJTn37HBciog35Z5RePDnWtqmqNUj9HnXrVxSZqkVKo/PvgOe4KZIWk3lOXuYtYRs/1T1Uu3+SCpMKzWnc3FwcCXdEMAGlCmEUSEGZjIszuGbVWDflotQuMaq30+u+N2uf+e+oDqFYq9ZtUy5RapHRxkk4jj1uwzrUGqp8cAGDfEWaRBITZmAizZdMpzX+PX+KuhtYVv5khVH0adfo/aCVVv0adig9aSXXKPbOVVPdx1Kl2XcihU9cAgMpFmEUSEGZjIszuoquD1TVAV4ArlOp0fji4/qzlYLuwOLCqX6j6S2p6AKiqtn+13TZ+
vtFWfbbKPtnyic3dMNdmrZ1lU1dPtYkrJ9q4FePs3WXv2shPRtrQJUPt7UVv25sL37Q35r9hfeb2sdfmvGbdZ3e3Lh91sX99+C97+YOXXdFr1Wlczzk9rffc3vb6vNdtwIIB9tbCt2zw4sE2fMlwG7V0lI1eNtp9jj5v5baV/pLtO8IskiDRYfbBBx+0WrVquYcxNG7c2K8t35IlS9x9bvfkaWNpDrNzVm1xF5boIia1rIaDa1B0hbTCra6E1oVaQJIs2bzEJq2a5IJCh5kdrOV7Le2hsQ/ZsCXDXDjB3nEhb+dGW/3Z6t1C3vtr3rfpn053r2evn+3qF2xa4KZZvnW5m37tjrXuvVu/2OrmE7bty222fsd6F9oWb15sczbMsZmfzrTJqybbmOVjXMBT2NP32WNOD+v8YWfrOLPjXhXtDzcOudEav9XYLh1wqZ33xnn2q//8yk557RQ7utvRBVle+fAVf0vtO8IskiCxYVYPV6hXr55NmzbNPRpXD2Jo2rSpP7ZsCr7BAxziSkuY3fr5V+4iKN2ySldO6zGKmcFVXQbUD1YXaE1YuJ4LoxJux1c7XGBQcFi3Y50LEiu2rXDBYuGmhS5oKHDMWjfLBRAFkTnr57ggoVBR6LZ8scU+3vCxjV0+1rVutZ3W1u4bfZ9dP/h6O+M/Z0SGgcxy8msn2+3Db7cXpr/gWtjWbPcey11V6Hv8dPunLhh+uO5Dm7J6ir2z9B0XCHt93MuFQYW6p6Y8ZX8e92e7d/S99vsRvy8V8s7sc2ZBh7xclQY9GthpvU+zs/qeZRf1u8iuGHiFXfv2tfa7ob+z24bfZn8Y8Qe7a9Rd9sd3/2j3j7nfWoxtYY+Me8T+MuEv1mpiK/v75L/bM1Oesefef87aTW/nhegZHdzrNu+3sdZTW7tpnpz0pD0+4XH3HTz83sP24JgH3b589zt3W7NRzdzn6PPUYltRCLNIgsSG2erVq1uHDh38IbOePXu6cKpwW5auXbvar3/9a8KsT7c3Uh9V3ZNTzwzPDK4qun+o7gKg+3WqDysqngLi8E+Gu9OHClydZnVyoeuvk/7qWgl1sNJB6reDfmuXDLjEHTRP6nlS5IE1H+WM3mfYxf0vthsG32B3jLzDHWh18NVBWS1jAxcOdC1mOu26N0WhSttDYUoHc81fB/E7R91ptw671ZoMbRJZFCailjezKIhpegWM9jPal7TKKUxou5/Y88RS71EQVoD456x/2viV411oLhTbvtjmfoyotVKnoPvO6+uWU9+JwpS22WUDLrPTe59ear2SUPR9aJ875/Vz3H531VtX2fVvX+/WS/ufwrb2Ee0rWufnpz1vL8580W0DBfNuRd3cfhmctu8/v/+u0/bFf4cK8tpf9b2qtVc/4orWF7nWY7Ucq9VYPwDTgDCLJEhkmF27dq0LopMnT/ZrPKrr1auXP1Ta6tWr7Sc/+YlNnz49tWFWt5zS7a900/R6Ea2uKroZv55T/8a05e6m6VWVDvhqiVQrpFog1fo449MZ7uA14pMRexQs1XIXddCl5L+o1Uwth2q1UiuYTsEOWjTIBRT1cYxLLdQKPwq8ClBRn3X+G+fb/aPvd8EpCMSVUdSK13hQYzu779mRy1VeUThs1KeRW69rBl3jQmHzd5pbi/dauNZAhcF/fvBPe3X2q9Zvfj93Gl/9M6eviQ55O7/O//1/UTEIs0iCRIZZhVEF0Q0bNvg1HtW1adPGHyrtlltusbvuusu9Li/Mvvfee3bMMceUlLp161q1atX8scmzfOMO+2PvGaWCq27wrttc6U4CurXVZ18UXn9Xnf5W4Jy3cZ4LmzqIqi+jDqo6uOogq1NxT0x8wh18dRDWwfi6t6+zKwde6Q7QOsirFeqEnidEHswruigcnNrrVNd6pJCgFiQFHS3Lb978jVsuhQa1Jt085GYXjnUK8k/j/uRakhROdPGHTvHqQhFdOKKQrT6GCt7qClBILUMKMVoufT86Da9l1kUrCnVaH51WVZi8Zdgt7vS0Ws/0o0AtaA+MecC1ounUqVrSFJ70nmenPutClE7tK7AphOr7ViujWnoVqNR6pu2iz1XXh0WbF7nuD/qRoh8rubRp5yZ30Y1ab9UCrNActS/ko2hZ1NqslnK1Hj86/lG3HfWdKMirf7C6XqhrAVAWwiySIJFhdubMmVnDbPv27f2h3fXt29cOO+ww277du5CgvDCreY8dO7akvPrqq65rQ9Js+OwL90z7cIB9+I1Z9p+py/b5GfJ7Qqdgl21d5lpxdLXtkMVD7D9z/+NaPRVadLBVuFEouPzNy11rZ67DgVpTdar43NfPdS2sV791tTv4xw6W6wozWCJ/Plr3kWvNz2w53ZMSnArvWtTVnQrXflfqVHhxkM88Fa4fOmodzbxYCtgXhFkkQSLD7KZNm1wQjepm0L9/f39od7pYTAE2XDS9/h09erQ/VXZJ62agOwroAq3/+4vXleDnjwx2j1NdtXmHP0XuqMVHB1+dllfrY1SQ3JOiFk61bKpFUy2ZatlTi55a8tSHUhem6OCvg74O9jrI6wKWD9Z+4JZlyeYlrrVuw+cb7LMvq263CQCoaIRZJEEiw6wonHbu3NkfMhs0aJAdfPDBtnTpUr9md5lBVqUqhtmvvv7WPeNfF20FLbG3d3/fPTo2F9QyqdOWal3Vlc1RYVRFFyupBVStn2r51MU76r+oU7S6j6LCry68UAhVAFX4JHgCQH4RZpEEiQ2z9913nwu0EydOtKKiImvYsKHdeeed/lhz9T/96U9txYoVfs3ugjAbV6GH2W+/NfdI19NCT9+66qUJNnPZJn+KfafT6TrFHlzhfUqv6FvwqIuAugyoC8GElRNy3ncRAJAbhFkkQWLDrLRs2dJq1qzpQmbmQxN0kZgCru5gEEVhVvemjauQw+w7H39q57cbWxJiz//He65uX3xb/N/8jfPd1du6l+Gv+/w6Mrjq3opNRzR1tzPShT9cUAIAVQdhFkmQ6DBbmQoxzKrVVa2vQYhVq2z/GStcK+2e+ubbb9zFK7r/olpVFVIzg6uu0L956M2uS4Eu4NIFXQCAqoswiyQgzMZUSGFW9369rdv7JSFW/WO7T9yzR25+8fUXNm3NNHfltC6oiroBvwKtgq0Crp4KpMALAEgPwiySgDAbUyGF2Uvaj3Mh9hePDrU2w+fa9i++9seUbenWpfaP6f+wm4bcVCq4quh+qLrnp+4KoHu6AgDSjTCLJCDMxlQoYXbr51+5IHvUX4a6e8jGoacc6RngmeFVN/HXvVT18AE9+hIAgDDCLJKAMBtToYTZEbPXuDDb+J8T/Zrs9FQmPU0pHGB1Oyw9PUtPSAIAoCyEWSQBYTamQgmzTwya7cJs2xHZuwHo8a+6SCt4gtYvu//ShVjuNAAA2BOEWSQBYTamQgmzF73wnguzExau92t20SNV1SdWdx1QiK3fvb49Mu4RW7Et+l67AACUhTCLJCDMxlQIYfazL76yn7UcbLUfGWw7v9p1ZwE9i/2fH/yz5CEGx3Q7xu4ffb97jCsAAHuLMIskIMz
GVAhhNrO/7M6vd1rXoq52Ru8zSvrENhvVzD3sAACAfUWYRRIQZmMqhDD7pN9f9tnhs93ts8JP5dLjZYvWF/lTAgCw7wizSALCbEyFEGYv9u8ve/1bTUtC7PWDr7epq6f6UwAAUHEIs0gCwmxM+Q6z4f6yx/do4O5UMHb5WH8sAAAVjzCLJCDMxpTvMDtyjtdf9qJ/dnctspe/ebk/BgCA3CDMIgkIszHlO8z+7e05Lsze3O8pF2b/Oumv/hgAAHKDMIskIMzGlO8we4nfX/aaN29xYXbI4iH+GAAAcoMwiyQgzMaUzzC7q7/s23ZSz5NcmNWjagEAyCXCLJKAMBtTPsPsqDmf+v1le7kge0G/C/wxAADkDmEWSUCYjSmfYfbvg73+srf0a+PC7J/H/dkfAwBA7hBmkQSE2ZjyGWYv7eD1l73xrTtcmO0/v78/BgCA3CHMIgkIszHlK8yG7y/bsNdpLswu3brUHwsAQO4QZpEEhNmY8hVm3/nY6y978ctvuCB7Wu/T/DEAAOQWYRZJQJiNKV9h9im/v+xt/du7MPvAmAf8MQAA5BZhFklAmI0pX2H2sg7jvYu/3r7XhdleH/fyxwAAkFuEWSQBYTamfITZoL+sSqM+jVyYnbdxnj8WAIDcIswiCQizMeUjzI6eu9brL/vSQBdk9cAEAAAqC2EWSUCYjSkfYfbpIR+7MNu0fycXZu8adZc/BgCA3CPMIgkIszHlI8xe9qLXX7bpkIddmP33R//2xwAAkHuEWSQBYTamyg6z4f6yF/e/xIXZWWtn+WMBAMg9wiySgDAbU2WH2THzvP6yl3Qc7oJsgx4N7Otvv/bHAgCQe4RZJAFhNqbKDrPP+P1l7xzQ1YXZ24bf5o8BACTKlzvMdm4127HRbNunZltWmm1aarZhkdm6eWZrisxWzTJbMc1s2WSzT8abLRm392XTMv+D9x1hFklAmI2pssPsb/z+sncNe8yF2Zc+eMkfAyCnPltn9ukcL1DoNaoOhUoFSoVJBUmFSAVIfdcLRhX/j36Q2Yevm8141WzKv8wmvGA29lmzUU+YDXvEbNC9ZgPuMOv7O7Ne15q9+huzLuebdfqV2Ysnmf3jGLM2dc2ePtzs8YPyV95r46/wviPMIgkIszFVZpjd+dU3Jf1lrxx4lQuzU1ZP8ccC2CNffGa2cYnZ8qnFf8hvm03rVhxQnjMb0sLs9VvMul9q1vEUs2drRwcDBZQ3bvPCzaoP/JmmlIKgWhQVAtWCqABYNMALf5OKf3Ar+I34ixf6+t3uBb5uF3thr/3xXtD7+0+it3NVL3//Hy/ktv6Ztx3aHlW8b9X3tkvHk81ebuhtp1fO9gJy14v2vnzY1//C9h1hFklAmI2pMsPsWL+/7MUvjrL63evbL7v/0nZ+vdMfi0T7fIvZtjXF4eoTs7Ufm62cabZ0ktmiMWZzhxQHg/5mM18ze79LcTjoaPbe82bv/t1s+KNmgx8we/MuL1j95wazHld6B67O55r969dm/zzD7KVTzV480eyFY83aHW32/C/MnqtTfACtZfbUYWZ/K96How60e1qeqlE8/3rFn3m6Fwb73GT21h/NRrUyG9/ObHpxYJzzlnfKc/VHZptXeKFyXyhIqTVtxXSzhe94LWhTX/EClFrNBtzphafwQf3fF0Yvf1RR0FCoCL//X41KT6dtqHHvPGk2b1jxcm3yFzABtm8o3u/mei2R2temdDIb/bS3/QbeXRzum5j1vMrbbgpXClsKX5nbIMlF37PCpH6kqDVVAfLfF3itrNp/1Oqq1lcFcm0XtcpqH5vQ3vtBo+CufU+tuArz2pZq3VXA1/6pVl/tq2oFrgIIs0gCwmxMlRlmnxnq9Ze9a8BrrlX2hsHFwQUVa+e24oC1vDhofWi2+D2z2QO9FrtxxUFs9FNmIx/3AuTQh70QOegeL0j2/0NxmLy1+IB3sxcoX7vGC5UKdC78FIdKBcr2x3lBUiGyogIkZe/KX/+fF8oU0BTW9L0qmCiQfDLBCyBlUShZPNb7UaHvOOoz9ANCAUg/QhRq8kE/hqZ3934ADW3p/ehRQNP++NwR0cu9p+Xpmt5+rRD4ylnF+/1lxX8HN3rrPvjB4oD/1+K/obZe6Pugl/eDRttOYU8hWkFPP+iQGIRZJAFhNqbKDLOXd/T6y9474m8uzLadVnxwwC5ffW722driELLYC6MKJGoh++gNL5BO7OC1Nuk0cr/fFwfOxl7rZYcG2U8lV1ZRi6aWoe3/ecvz8mnFy3aOWbdLvOXsUxwMtMxv3e0t/8jHvHVRyJ78srd+s/7jhe/5w70grvXXKfSVM7xWUPX3XL/AO7WuwL51tdf38/PNXuuott++0sUsCia6eEUtpVoOtVR90NtrLVWgefdvxevwkNdiqsCjYBW1TeIWtSxXdmtwNksnev0SX708+rS5WjWD1t09LcEPIrWuKzg+89OK+UGkeekUtn6EKXjqR5srxfuX+oa+/2/v9LRCsVob1aVi/ULvTEIVaWXEniPMIgkIszFVVphVf9naj3j9Za9/+wYXZscuH+uPTTC1hG5dVRx+5nuhS4FD/RcVzKZ29sKaCz/FAe7NZl6oU1BQ3zH1Z1T4e+Z/ow/Se1MUDtoc6c1bLXa9r/M+d9ifvIO7gooCkoLx5H96y6iwNLOnt8wf9fMC5dzBXqhc+K63TsumeAFb66kgqdCtdUfVpkCvfaX39RW7n0aV4AeRgn2HE7xwrx9E6h4wsLnXgqxgqv1SgVRhFNhLhFkkAWE2psoKs+/N9/vLtn/X9ZVVn9ntX233xyaEbjEzravZ2/d5B9qoA/K+FrVW6WCuFiy1uva4wmupU1cAdQ3QAV3hQuFToVktaWqxVKAGKssXxX+7ahHfvt5rIVdrsfpLq+VcfabVgqx+02pZ1z6qH0R6rfqSH0TrctfCDJSDMIskIMzGVFlhtvWwuS7M/nHA665V9qq3rvLHFCgdkNVSqT56CpVRwVNFLaG6EEmnThVwdTpV/U3V/1Sniof/2WzMM8UB9EWvBVQtn/NHeAd4Hdh1ylyBAABQaQizSALCbEyVFWav6DjBhdkHRz7nwuxTU57yxxQAtSqpT6IujlKfRV0MkhlaVafuAeqLpyvO1bcSAJBIhFkkAWE2psoIs+H+srcMvdWF2eFLhvtjK5luLaOWUbWWqgVVraqZwbXVwd5FTOprqguTPp1t9u23/gwAAElHmEUSEGZjqoww+978dX5/2bHWoEcDF2Y379zsj80h9etTX73x//BuOaX7k2YGVxXd9FtX3ev+mrq4JEn31wQA7DHCLJKAMBtTZYTZZ/3+svcMGOiC7MX9L/bHVKCvvzBb/r53s3TdM1X3x2xVrXRwffJQs05nehdxzejht7p+488EAJAGhFkkAWE2psoIs1e95PWXfeSd9i7MPjbhMX9MBVg02ru/5BM/LB1cFWb15CPdD1T3CNVthhR6AQCpRphFEhBmY8p1mA33l71j5F0uzA5cONAfu5d0SyDdHUDP/g6HVz0NSU9C0lOQdHN0bogOAIhAmEUSEGZjyn
WYHbfA6y974Qvv2Uk9T3JhdsW2Ff7YPaSb9ut+q+GnBv21uvdkKd3UHwCAGAizSALCbEy5DrPPDff6y97/5lAXZH/d59f+mJi+2mk28zXvUZi7tcIe4z3JSncnAABgDxBmkQSE2ZhyHWavfnmiC7OPje7kwmyLsS38MeXQwwSGP2rWutauAKs+sK819u44wK2yAAB7iTCLJEh8mN2wYYOtXr3aHyrfkiVLbNasWf5QfLkMs0F/WXcng3fvd2G2z9w+/tgIuqvA3CFmPa707vUahNjWPzMb+ZjZpmX+hAAA7D3CLJIg0WH2wQcftP3228+Vxo0b+7XRWrVqZSeffHLJ9HXq1LG7777bH1u+XIbZ8X5/2Qv+8Z7rXqAwu2DTAn9sBF28FQRYFT1GdlYZ4RcAgL1AmEUSJDbMNmvWzOrVq2fTpk2zBQsWWKNGjaxp06b+2NIUZtu3b29FRUW2du1a69ixowu1qo8jl2H2+RHzXJh9aMA7Lsie1vs0f0yEndu8APvX/2f21h/NVn/kjwAAoGIRZpEEiQ2z1atXtw4dOvhDZj179nThVOE2rtq1a9vll1/uD5Utl2G28T+9/rJ/HdPNhdk/vlscUrMpGuCF2Vd/41cAAJAbhFkkQSLDrFpWFVwnT57s13hU16tXL3+ofAqn6qoQR67CbLi/7ENj/uTCbPfZ3f2xEfrd7oVZPcELAIAcIswiCRIZZqdPn+6Cqy7+ClNdmzZt/KGytW7d2vWhzZxHYOzYsXbUUUeVFLXiVqtWzR9bcSYsXO+C7Pn/eM8u6HeBC7NF64v8sRm++drsqcO8MLt1lV8JAEBuEGaRBIkMszNnzswaZtUvtjzDhg1z0w4aNMivKW3jxo02YcKEkqIWX3VtqGht/f6yDw8Y74Jsgx4N7BvdrSDK4ve8IPtyGX1qAQCoIIRZJEEiw+ymTZtcGI3qZtC/f39/KJou+NJ0ffv29WviyVU3g6C/7NNje7kw23RE9ovYbGhLL8yOfsqvAAAgdwizSILEXgCmOxl07tzZHzLXynrwwQfb0qVL/ZrS9jbISi7CbLi/7KPjnnBhttOsMvrCtjvaC7OrPvArAADIHcIskiCxYfa+++5zgXbixInudlsNGza0O++80x9rrmvAYYcdZsuXL3fDPXr0KAmyo0eP3q3EkYswO9HvL3te27F2+ZuXuzA7bU2WuzF8OtsLsm2O9CsAAMgtwiySILFhVlq2bGk1a9Z0ITPzoQkzZsxwf4Rr1qxxwxqve9FGlThyEWbbjfT6y7Z8c7ILsr/s/kv7+tuv/bEZxj7nhdm37/crAADILcIskiDRYbYy5SLMXtNpkguzbcb1d2H25iE3+2Mi/OvXXphd+I5fAQBAbhFmkQSE2ZgqOsyG+8v+bWJrF2ZfmP6CPzbD9g1ekP1b8efr9lwAAFQCwiySgDAbU0WH2SmLN7gge27bsXbt29e6MDtuxTh/bIb3/+2F2b6/8ysAAMg9wiySgDAbU0WH2b7vL3Nh9k8DZlj97vVd2f7Vdn9shp5Xe2F2Vh+/AgCA3CPMIgkIszHlos/sji+/tqELx7pW2WsGXePXZvhyh9mTh5o98UOznVv9SgAAco8wiyQgzMaUizArHWZ2cGG29dTWfk2GOW95rbLdLvYrAACoHIRZJAFhNqZchdkmQ5u4MDtq6Si/JkP/P3hhdtJLfgUAAJWDMIskIMzGlIswq3vKNujRwIXZzTs3+7Uh335r9tRhXpjdlP3JZgAA5AJhFklAmI0pF2F2xqczXJC9bMBlfk2GT8Z7QfalU/0KAAAqD2EWSUCYjSkXYfaVD19xYbbVxFZ+TYbhf/bC7DtP+hUAAFQewiySgDAbUy7C7B0j73Bh9u1Fb/s1Gdod7YXZFdP9CgAAKg9hFklAmI2posPsN99+Yyf1PMmF2bU71vq1IWs/9oJs61p+BQAAlYswiyQgzMZU0WF2zvo5Lsie98Z5fk2GcW29MDvoHr8CAIDKRZhFEhBmY6roMPvanNdcmH1k3CN+TYZXzvbC7PzhfgUAAJWLMIskIMzGVNFhdufXO23iyon24boP/ZqQ7RvMWh1s9rfiz/v6C78SAIDKRZhFEhBmY8rFBWBZTe/mtcr+50a/AgCAykeYRRIQZmOq1DDb67demP2gl18BAEDlI8wiCQizMVVamP1yh9mTh3rdDHZu9SsBAKh8hFkkAWE2pkoLsx8P8lpl/32hXwEAQH4QZpEEhNmYKi3MvtnMC7MT2vsVAADkB2EWSUCYjalSwuy333oPSVCY3bTUrwQAID8Is0gCwmxMlRJml07yguyLJ/oVAADkD2EWSUCYjalSwuyIv3hhduTjfgUAAPlDmEUSEGZjqpQw2+5oL8wum+JXAACQP4RZJAFhNqach9l1870gqz6z6jsLAECeEWaRBITZmHIeZsf/wwuzA5v7FQAA5BdhFklAmI0p52G2y3lemJ07xK8AACC/CLNIAsJsTDkNs9s3eE/8+lvx/L/+wq8EACC/CLNIAsJsTDkNszN6eK2yva/zKwAAyD/CLJKAMBtTTsOsQqzC7IxX/QoAAPKPMIskIMzGlLMwq24F6l6gbgbqbgAAQIEgzCIJCLMx5SzM6oIvtcp2PtevAACgMBBmkQSE2ZhyFmYH3u2F2fHt/AoAAAoDYRZJQJiNKSdhVg9H0EMSFGbXzfMrAQAoDIRZJAFhNqachNnlU70gq8fYAgBQYAizSALCbEw5CbMjH/fC7PBH/QoAAAoHYRZJQJiNKSdh9sUTvTC7dKJfAQBA4SDMIgkIszFVeJjdtNQLsuozq76zAAAUGMIskoAwG1OFh9mJL3phdsCdfgUAAIWFMIskIMzGVOFh9ssdZvOHm63+0K8AAKCwEGaRBITZmHLSZxYAgAJGmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBIkPs2vXrrUVK1b4Q/HMmjXLduzY4Q/FQ5gFAKQNYRZJkOgw27JlS9tvv/1cady4sV+bXVFRkZ144olu+h/84AfWqlUrf0z5CLMAgLQhzCIJEhtm7733XqtXr55NmzbNFixYYI0aNbKmTZv6Y6OdcsopLvSuXLnSJk+ebAceeKB16tTJH1s2wiwAIG0Is0iCxIbZGjVqWIcOHfwhs549e7oWV4XbKAqjGj9z5ky/xuzuu++2Bg0a+ENlI8wCANKGMIskSGSYVT9ZBVO1roaprm/fvv7Q7lT//e9/3x/yjB492r1n9erVfk12hFkAQNoQZpEEiQyz06dPdyF0w4YNfo1Hde3atfOHdte2bVurX7++P+QJwuyMGTP8ml007ogjjigphx12mP33f//3bnX7Wv7nf/7HDj/88MhxFG/7qESNo3hF26dWrVqR4yjsQ3GKts/Pf/7zyHEU9iE1Aj355JP+kREoTIkMs+oqkC3Mtm/f3h/anUJutjD7wQcf+DW7bNq0yaZMmVJSRo0aZS+99NJudftaFJC7d+8eOY4yxR555
BE7++yzI8dRvKJ+3/37948cR5lid911l11++eWR4yhe+a//+i8bOXJk5DjKFPvd735nN998c+S4NJQ+ffrYkiVL/CMjUJgSGWYVNBVCo7oZ6MAeZcCAAVm7Gaxbt86vqVxqDcnWxxfmLs6Lc5eKNPvhD39oixYt8oeQ6Zlnnin3wtC0U5jNbBjALrprzsMPP+wPAShEib0ATHcyyLwA7OCDD7alS5f6Nbtbvny5C66ZF4Cdc845/lDlI8yWjTBbPsJs2Qiz5SPMlo0wCxS+xIbZ++67zwVa9Z8Nbs115513+mPNxo8f7y7YUogNnHrqqS4crVq1quTWXF26dPHHVj7CbNkIs+UjzJaNMFs+wmzZCLNA4UtsmJVu3bq5W2sdffTRpTqoqwX2rLPOsjVr1vg15v6HrXCkAKBxr732mj8mP3r37k2YLYO2jwqy0/YhzGbHPlQ+bR/CbHbsQ0DhS3SYBQAAQLoRZgEAAJBYhFkAAAAkFmEWAAAAiUWYzRNdHXvppZda8+bNbdKkSX5t8umCu+HDh9tzzz1nrVq18mtLe/DBB+3CCy+02267zaZOnerX7qLHD99000120UUXRc5n8+bN7o4WuovFHXfcYUVFRf6YXXSBny74u/LKK8tclsrWr18/t95XXXWVde3a1davX++P2aW8ZY+z/q+++qo1adLErrvuush56D333HOPm4f+jZpHPmj7aJlUtJ9of8qke023aNGizGXXOmvd92X9y5tHvmnf0XJFLVt5y14R6x9nHvmgZY0qYZW1/uXNA8C+I8zmgUKKguygQYPc/9wOOOAAGzx4sD822bQ+derUcffw1X19o2j9VYL11y3Shg4d6o81V6/36mluetiFbsGmA0FAQUZ3sdA89OALzUPT6JZrAdVpHrq9l+5BrPGqyzctgw58nTt3doH9kksuseOOO862b9/uT1H+su/N+teoUWO3eejqdb0ncx4rV670p8gfbZ8HHnjAffd6cp/uPPL000/7Y71lP/bYY8tc9opY//LmUQhuvPFGt71Uwipj/VesWOHeo7rwPFSfb9oeWp7MEgiWvbz11zpr3aPWX9OWt/7heWg7anuG5wGgYhBmK1kQ1MKtkZdddpndfvvt/lDVoP+5az0zaf0PPfRQf8ij9Vd4CSjot27d2h/y7hmsea1evdoNK+REzSN8kFDYy5yHDirhwJcPeixy2Nq1a926qRU1UN6yx1l/HVTD8+jYsaObR3Cw1sE1cx76EVKIB1rd51PLHoha9iBUBDLXXz+M9nT9NY/w47Ez55FvujXhKaecEhlm46y/1jcsvP47duyw/fffv9T6q07jRC3jUfNQfb4FYTabbMu+J+uvactaf21rbfPMeei7AVCxCLOVTKeGM586pv8pHn/88f5Q1ZAtzCq0RrUiBeuvFiO9T+8PU93AgQPd6+D0epjmcfLJJ7vXW7ZsyTqPN9980x8qHLVr1y55eEecZd+b9Vdo/t73vudO4Uu2eZx44on+UOF4/PHH3Q+cQHnLXtY2jLv+wTwyheeRT2r900Nh9MRDrUd4XeKuv9Y3LLz+em+29Q/mq8+Mmkfmds2HYNm0rFEtxdmWfU/WX9OWtf7a1tnmoe8IQMUp/ZeGnFI/yVtvvdUf8qiVpHr16v5Q1ZDtYHDttdeWaoUOr7/6nOl9mQcgtXiodVF0ajBqHj/5yU/c67Lm8fLLL/tDhUEtNVrW4OEZcZZ9b9e/bt26JY+AjpqHDsQKSIVAy6KiftX6oTNy5Eh/TPSyq+9xsOzZ1l91cdc/mEem8Dzy6frrr3f9iUXhKRwg466/1jcsvP5xwly2QBhelnzRMvzv//6vHXPMMW6ZNTxkyBB/bPZl35P117Rlrb+2dbZ56DsCUHFK/6Uhpy644AJ32jRM/5P97ne/6w9VDdkOBlr/zNOQ4fXXxXB6X3AqL6BWx6Df5Pnnnx85D7U8ysSJE8udRyEItlH4gBhn2fd2/XVKuqx5aDmCeeSblkVF+4tCyb///W9/TPSyq891eeuvurjrH8wjU3ge+aJW/F/84hf+UOkwG3f9w/udhNc/TpjLFgjDy5IvEyZM8F9566KWfXXfCWRb9j1Zf01b1vprW2ebh74jABWn9F8acuoPf/iDa50MU+ucTjVXJdkOBlp/XaEfFl5/nTbV+2bNmuWGA4cccoj16NHDvdadAKLmERzgly9fnnUeasEsBNu2bSvVz1PiLPverr9absuah5YlHJIKxTPPPOMuEvz888/dcNSyqxWsvPVXXdz1D+aRKTyPfPnpT3/q/r6CovCkEoSsuOsfFcSC9c/296u64HOyBcIgzBUSLVd4fbIt+56sv6Yta/21rbPNQ98RgIpT+i8NOaX/2dWqVcs+++wzv8asWbNmBXkA2BfZDgZa/yOPPNIf8qjbRbD+X3zxhbtoQqeNA0uWLNntIJJtHuF+lZo+ah6ZF2DlQ3AaeNiwYX7N7spb9rjrr7slBILvIzyPzAtRdOo5PI9CEaz/jBkz3HCcZc9cf3Xj2NP11/RB9w/JnEe+6G8lWwnEWX+tb1h4/YP9JWr9g79DfV7UPMLLUSjatGnj/r8SyLbse7L+mras9de2zjYPABWLv6pKFlzhqvuwioKNDqqF1pdzXwUHg0y6Iv+www4rOdBGrb8OtNdcc40/5A2rv2dg5syZbt7BPGbPnu3moTslBHR/1fCBRvMIB5V80TJr2XVwzaa8Zd+b9dc9e8uaR/B9heeRL+Fl0AWBWhf9AAxuXxZn2Sti/cubR6FQeMoMkBWx/urfnzkP1QXUj1nv0XslmEe4f3M+aDmCZRL9GPrtb3/r+l8HgmUvb/21zoHM9de05a2/tnl4Htqe+m4AVCzCbB7of6D6n179+vXtoIMOclcWVxUKXlq3zBI+uATrrz5s1apVi1x//U9foV/3E1WQzTxAvvTSS24eOojrFLQ+N0zdFS6++GIXgvR+hb1CuOhCyxveLkEJL3+cZY+7/jVr1nRdOKLmEXxXmofudZw5j3zRMukHT7Ct1Aod3MkiEF7/qGUPfiTty/rHmUch0PKrhFXW+us9em+2eeRDECr196P/x+j1mWeeWerhG3HXX+u+t+sfnoe2Y9Q8AOw7wmyeqMVp8uTJrtWgKtGBJFsJi7P+CxYscPfjDXfJCNPtpjTfzKu2w3TgUN/BzIth8iVzm4RLpvKWvSLWX+8tbx75EN4u69at82t3V976a5217vuy/nHmkW/BdspUWetf3jwqm/pWf/zxxyXbpazlqoz1jzMPAPuGMAsAAIDEIswCAAAgsQizAAAASCzCLAAAABKLMAsAAIDEIswCAAAgsQizAAAASCzCLFCFBPfWXLRokV/jCepzSfPPvHl/PrVv395uvPFGt0y5XncAQP4QZoEqRE8gOuOMM+zqq6/2azwKc3raUS5VxmfEpRvna1n02GAt156GWQXgzKc5AQAKE2EWqEIUwBTEFOSGDRvm16YvzGpZqlevbs2aNdvjICuEWQBIDsIsUIUEYfaee+6xk046ya8tHTSjwlq4Lpi+X79+Jc+3P+20
09y4J554wurWrWs1atSwp59+2tVJ8J6+ffvaL3/5S/f6yiuvLPWYz1deecXq16/vxp966qnWqVMnf8yu5de/RxxxhHudTYsWLdzz7n/84x/v9jl6r+YdlGzz0PJefvnlVq1aNTednpsvme9XCeg9WmbVaR1at27tj9m1/s8995x7Fv9BBx1Uav0ff/xx+9WvflUy37LWDwAQD2EWqEKCMLhy5Ur7/ve/by+++KKrD4JWIAiMYeG6YHoN63VRUZGdd955blin7pcsWWIjRoxwnzFt2rTd3nPCCSfYyJEj3bDmee6557rx0rhxY7viiitsyJAhblhBVu8JhoPlv+6662zx4sXuc6IoyGo6fcakSZPsmmuu2e1zgs8ui8a/+uqrtmXLFjes9wQ0LnP7dO7c2S2rwrro38MPP7xkOFj/YLmCZdA6BzR+zJgxtnXrVjfctWtX9y8AYO8RZoEqJAiDwetatWrZzp07S4JWICqsheuC6SdPnuyG5e6777ZDDjnEduzY4deYa50MWlaD97zxxhtuWBRSVTdo0CDbsGGDe92xY0d/rCf8ufpX06xfv94NR9m4caObplevXn6NudbP4HMkCJJl0XiF2TVr1vg1u0Rtn4YNG5aqu/baa10ruATr//rrr7th0TKqTsusHxh6remCMAsA2HeEWaAKUdgKhzh1BXjsscdKglYgKqyF6zKnl8x5S9R7FNzC6tSp41pzp0yZ4sZHlWAeUZ+RSS3Beo/CcZiCtT5HtCzlzUefdcopp7h5KagHrcMSXq9AeHnDJficqPUPAnzQet28eXO3PfS93H///TZu3DhXDwDYe4RZoArJDIPqZrD//vvHCrPHH398Sd2+hNnp06e7YQnCXPfu3d3twvRap9mzifqMTMF8wq3G8qMf/ch9jsQJs4HBgwfb7373OzfPiRMnurqo7aN+wuovnE3U+msZVZd5q7TevXvb7bff7ra5Ws4BAHuPMAtUIVFhUBc23XHHHS5UBdRXUxd2Bd5//303XWYwDYuadzj0Be95/vnn3bDoXq8K0zNmzHDD+ox7773XvQ4Lwl7UZ0TRxWhBK6wohIY/p7wwu3DhQv/VLpo+6LqguyA0bdrUvQ40adIkcp5Bt4uo9dcyBheWZQZa0fTz5s3zhwAAe4MwC1QhUWGwf//+LjSFw+mcOXPsF7/4hbuoS8FNp+j1vooIs7rgSfUqGg7Gi0LnD37wA3dFv06za5ym13sl6jOiDBw40PXf1bTnn39+qc8pL8xq/GWXXebeEyzD6aef7o/1Lu7SPBVgw/PVXRqOO+44+/3vf+/qte2C8cH6n3nmmZHrr/H6HA0/9NBDdsEFF9g555zjxgEA9h5hFqhCFJSC8BQWVa+LkNQSqQuydLeC8DQKXpnTR80j23t69uzpXof7oQbUkqnT7LqtV+Y04fmVZ9asWW7Z1fo7fPhwv9YTtfyZdBeE4PN0T97grgYB1T377LOl5qMfB7oll+p1m7FAEGbV0qp6vTd8r1/Ruup9arHV9FwIBgD7jjALABUgCLMAgMrF/3kBoAIQZgEgP/g/LwBUAIVZFQBA5SLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgos/8PAKXZKAZp7rQAAAAASUVORK5CYII=">
## Tokenizer
The starting tokenizer is [BarthezTokenizer](https://huggingface.co/transformers/model_doc/barthez.html), to which the special tokens \<sep\> and \<hl\> were added.
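For reference, registering extra special tokens on a tokenizer is typically done as in the sketch below; the base checkpoint name and the exact procedure are assumptions, since the card only states that the tokens were added to BarthezTokenizer.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed base checkpoint; the card does not specify which BARThez weights were used.
base = "moussaKam/barthez"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Register the two extra markers used for answer highlighting and separation.
tokenizer.add_special_tokens({"additional_special_tokens": ["<sep>", "<hl>"]})
model.resize_token_embeddings(len(tokenizer))  # grow the embedding table for the new tokens
```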
## Usage
_This model is a proof of concept; we do not guarantee its performance._
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = Text2TextGenerationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("Les projecteurs peuvent être utilisées pour <hl>illuminer<hl> des terrains de jeu extérieurs")
# >>> [{'generated_text': 'À quoi servent les projecteurs sur les terrains de jeu extérieurs?'}]
```
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "Les Etats signataires de la convention sur la diversité biologique des Nations unies doivent parvenir, lors de la COP15, qui s’ouvre <hl>lundi<hl>, à un nouvel accord mondial pour enrayer la destruction du vivant au cours de la prochaine décennie."
inputs = loaded_tokenizer(text, return_tensors='pt')
out = loaded_model.generate(
input_ids=inputs.input_ids,
attention_mask=inputs.attention_mask,
num_beams=16,
num_return_sequences=16,
length_penalty=10
)
questions = []
for question in out:
questions.append(loaded_tokenizer.decode(question, skip_special_tokens=True))
for q in questions:
print(q)
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand a lieu la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations unies?
# Quand se tient la conférence de la diversité biologique des Nations unies?
# Quand a lieu la conférence sur la diversité biologique des Nations unies?
# Quand a lieu la conférence de la diversité biologique des Nations unies?
# Quand se tient la conférence des Nations unies sur la diversité biologique?
# Quand a lieu la conférence des Nations unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations Unies?
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence de la diversité biologique des Nations Unies?
# Quand la COP15 a-t-elle lieu?
# Quand la COP15 a-t-elle lieu?
# Quand se tient la conférence sur la diversité biologique?
# Quand s'ouvre la COP15,?
# Quand s'ouvre la COP15?
```
## Citation
Model based on:
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
chrommium/sbert_large-finetuned-sent_in_news_sents_3lab | chrommium | 2021-10-11T13:29:58Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents_3lab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents_3lab
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9443
- Accuracy: 0.8580
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
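In the absence of documented usage, a minimal inference sketch with the standard `transformers` pipeline might look like the following; the label names come from the model's own config and are not documented in this card, and the example sentence is only illustrative.
```python
from transformers import pipeline

# Hypothetical usage sketch for this 3-label sentiment classifier (the base model is Russian).
classifier = pipeline(
    "text-classification",
    model="chrommium/sbert_large-finetuned-sent_in_news_sents_3lab",
)
print(classifier("Акции компании выросли после публикации отчёта."))
```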
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 264 | 0.6137 | 0.8608 | 0.3084 |
| 0.524 | 2.0 | 528 | 0.6563 | 0.8722 | 0.4861 |
| 0.524 | 3.0 | 792 | 0.7110 | 0.8494 | 0.4687 |
| 0.2225 | 4.0 | 1056 | 0.7323 | 0.8608 | 0.6015 |
| 0.2225 | 5.0 | 1320 | 0.9604 | 0.8551 | 0.6185 |
| 0.1037 | 6.0 | 1584 | 0.8801 | 0.8523 | 0.5535 |
| 0.1037 | 7.0 | 1848 | 0.9443 | 0.8580 | 0.6199 |
| 0.0479 | 8.0 | 2112 | 1.0048 | 0.8608 | 0.6168 |
| 0.0479 | 9.0 | 2376 | 0.9757 | 0.8551 | 0.6097 |
| 0.0353 | 10.0 | 2640 | 1.0743 | 0.8580 | 0.6071 |
| 0.0353 | 11.0 | 2904 | 1.1216 | 0.8580 | 0.6011 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
juliensimon/autonlp-imdb-demo-hf-16622775 | juliensimon | 2021-10-11T12:46:02Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-imdb-demo-hf",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- juliensimon/autonlp-data-imdb-demo-hf
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622775
## Validation Metrics
- Loss: 0.18653589487075806
- Accuracy: 0.9408
- Precision: 0.9537643207855974
- Recall: 0.9272076372315036
- AUC: 0.985847396174344
- F1: 0.9402985074626865
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622775
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
juliensimon/autonlp-imdb-demo-hf-16622767 | juliensimon | 2021-10-11T12:38:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-imdb-demo-hf",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- juliensimon/autonlp-data-imdb-demo-hf
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622767
## Validation Metrics
- Loss: 0.20029613375663757
- Accuracy: 0.9256
- Precision: 0.9090909090909091
- Recall: 0.9466984884645983
- AUC: 0.979257749523025
- F1: 0.9275136399064692
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622767
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
mse30/bart-base-finetuned-arxiv | mse30 | 2021-10-11T11:22:28Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:scientific_papers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-base-finetuned-arxiv
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 13.6917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-arxiv
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2912
- Rouge1: 13.6917
- Rouge2: 5.9564
- Rougel: 11.1734
- Rougelsum: 12.6817
- Gen Len: 19.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
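As a rough usage sketch (not part of the original card), the checkpoint can be exercised with the standard summarization pipeline; the generation settings below are assumptions.
```python
from transformers import pipeline

# Minimal sketch: summarize an abstract-length input with the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="mse30/bart-base-finetuned-arxiv")
article = "We study the convergence of stochastic gradient descent under heavy-tailed noise..."
print(summarizer(article, max_length=64, min_length=16, do_sample=False))
```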
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6027 | 1.0 | 6345 | 2.4504 | 13.3687 | 5.603 | 10.8671 | 12.3297 | 20.0 |
| 2.4807 | 2.0 | 12690 | 2.3561 | 13.6207 | 5.855 | 11.1073 | 12.594 | 20.0 |
| 2.4041 | 3.0 | 19035 | 2.3035 | 13.6222 | 5.8863 | 11.1173 | 12.5984 | 20.0 |
| 2.3716 | 4.0 | 25380 | 2.2912 | 13.6917 | 5.9564 | 11.1734 | 12.6817 | 19.9992 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
GKLMIP/bert-myanmar-small-uncased | GKLMIP | 2021-10-11T04:59:22Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | The usage of the tokenizer for Myanmar is the same as for Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
GKLMIP/bert-myanmar-base-uncased | GKLMIP | 2021-10-11T04:58:59Z | 28 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | The usage of the tokenizer for Myanmar is the same as for Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
suwani/BERT_NER_Ep5-finetuned-ner | suwani | 2021-10-11T03:06:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep5-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep5-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3553
- Precision: 0.6526
- Recall: 0.7248
- F1: 0.6868
- Accuracy: 0.9004
## Model description
More information needed
## Intended uses & limitations
More information needed
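Since the training data and label set are not documented, any usage example is necessarily a sketch; with the standard token-classification pipeline it would look roughly like this.
```python
from transformers import pipeline

# Hypothetical sketch; the entity label names depend on the (undocumented) training dataset.
ner = pipeline(
    "token-classification",
    model="suwani/BERT_NER_Ep5-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Barack Obama visited Microsoft headquarters in Redmond."))
```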
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3675 | 0.5906 | 0.5854 | 0.5880 | 0.8802 |
| 0.4803 | 2.0 | 576 | 0.3456 | 0.5863 | 0.7371 | 0.6531 | 0.8864 |
| 0.4803 | 3.0 | 864 | 0.3273 | 0.6478 | 0.7091 | 0.6771 | 0.8987 |
| 0.2233 | 4.0 | 1152 | 0.3441 | 0.6539 | 0.7226 | 0.6865 | 0.9001 |
| 0.2233 | 5.0 | 1440 | 0.3553 | 0.6526 | 0.7248 | 0.6868 | 0.9004 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
bsingh/roberta_goEmotion | bsingh | 2021-10-11T00:26:09Z | 992 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"emotions",
"en",
"dataset:go_emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: en
tags:
- text-classification
- pytorch
- roberta
- emotions
datasets:
- go_emotions
license: mit
widget:
- text: "I am not feeling well today."
---
## This model is trained on the GoEmotions dataset, which contains 58k labeled Reddit comments spanning 28 emotions
- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral
## Training details:
- The training script is provided here: https://github.com/bsinghpratap/roberta_train_goEmotion
- Please feel free to open an issue in the repo if you have trouble running the model, and I will try to respond as soon as possible.
- The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', and 'surprise'.
- I'll try to fine-tune the model further and will update here if RoBERTa achieves better performance.
- Each text datapoint can have more than one label. Most of the training set had a single label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}), so currently I just used the first label for each datapoint. Not ideal, but it does a decent job. A minimal usage sketch is shown below.
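A minimal usage sketch, assuming the standard text-classification pipeline; the example text is the widget sentence from this card, and the model returns one of the 28 emotions or neutral.
```python
from transformers import pipeline

# Single-label sketch; the model was trained with one label per datapoint (see the note above).
classifier = pipeline("text-classification", model="bsingh/roberta_goEmotion")
print(classifier("I am not feeling well today."))
```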
## Model Performance
| Emotion | GoEmotions Paper | RoBERTa | Support |
|:---------------|:----:|:----:|-----:|
| admiration | 0.65 | 0.62 | 504 |
| amusement | 0.80 | 0.78 | 252 |
| anger | 0.47 | 0.44 | 197 |
| annoyance | 0.34 | 0.22 | 286 |
| approval | 0.36 | 0.31 | 318 |
| caring | 0.39 | 0.24 | 114 |
| confusion | 0.37 | 0.29 | 139 |
| curiosity | 0.54 | 0.48 | 233 |
| disappointment | 0.28 | 0.18 | 127 |
| disapproval | 0.39 | 0.26 | 220 |
| gratitude | 0.86 | 0.84 | 288 |
| joy | 0.51 | 0.47 | 116 |
| love | 0.78 | 0.68 | 169 |
| neutral | 0.68 | 0.61 | 1606 |
| optimism | 0.51 | 0.52 | 120 |
| realization | 0.21 | 0.15 | 109 |
| sadness | 0.49 | 0.42 | 108 |
 |
S34NtheGuy/DialoGPT-small-cursedryno | S34NtheGuy | 2021-10-10T21:57:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
# DialoGPT chatbot model using Discord messages as data |
Lazaro97/results | Lazaro97 | 2021-10-10T21:48:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.8404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- Accuracy: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
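The card does not document the label scheme; as a sketch, inference with the standard pipeline on a Spanish review would look like this (label meanings are defined by the training setup and are an assumption here).
```python
from transformers import pipeline

# Hypothetical sketch; the returned label names come from the model config, not this card.
classifier = pipeline("text-classification", model="Lazaro97/results")
print(classifier("El producto llegó a tiempo y la calidad es excelente."))
```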
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3542 | 1.0 | 125 | 0.3611 | 0.839 |
| 0.2255 | 2.0 | 250 | 0.3793 | 0.8404 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Fiddi/distilbert-base-uncased-finetuned-ner | Fiddi | 2021-10-10T20:08:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9290544285555925
- name: Recall
type: recall
value: 0.9375769101689228
- name: F1
type: f1
value: 0.9332962138084633
- name: Accuracy
type: accuracy
value: 0.9841136193940935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9291
- Recall: 0.9376
- F1: 0.9333
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
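A minimal inference sketch, assuming the usual CoNLL-2003 entity types (PER, ORG, LOC, MISC):
```python
from transformers import pipeline

# Sketch: run the fine-tuned checkpoint through the token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="Fiddi/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```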
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2412 | 1.0 | 878 | 0.0688 | 0.9178 | 0.9246 | 0.9212 | 0.9815 |
| 0.0514 | 2.0 | 1756 | 0.0608 | 0.9251 | 0.9344 | 0.9298 | 0.9832 |
| 0.0304 | 3.0 | 2634 | 0.0604 | 0.9291 | 0.9376 | 0.9333 | 0.9841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-cola-copy4 | gchhablani | 2021-10-10T19:30:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy4
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy4
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6500
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6345 | 1.0 | 2138 | 0.6611 | 0.0 |
| 0.6359 | 2.0 | 4276 | 0.6840 | 0.0 |
| 0.6331 | 3.0 | 6414 | 0.6500 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
S34NtheGuy/DialoGPT-small-wetterlettuce | S34NtheGuy | 2021-10-10T17:59:38Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
# DialoGPT chatbot model using Discord messages as data |
mamlong34/t5_small_cosmos_qa | mamlong34 | 2021-10-10T15:37:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cosmos_qa",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cosmos_qa
metrics:
- accuracy
model-index:
- name: t5_small_cosmos_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_small_cosmos_qa
This model is a fine-tuned version of [mamlong34/t5_small_race_mutlirc](https://huggingface.co/mamlong34/t5_small_race_mutlirc) on the cosmos_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5614
- Accuracy: 0.6067
## Model description
More information needed
## Intended uses & limitations
More information needed
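The input format used during fine-tuning is not documented here; the prompt serialization below is purely an assumption, meant only to illustrate how a text2text checkpoint is typically queried.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: the exact "question/context/choices" serialization this model expects is unknown.
model_name = "mamlong34/t5_small_cosmos_qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = (
    "question: Why did the narrator stay home? "
    "context: It was raining heavily all day, so I decided not to go out. "
    "choices: (A) It was sunny. (B) The weather was bad. (C) They had a party. (D) None of the above."
)
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_length=16)[0], skip_special_tokens=True))
```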
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4811 | 1.0 | 3158 | 0.5445 | 0.5548 |
| 0.4428 | 2.0 | 6316 | 0.5302 | 0.5836 |
| 0.3805 | 3.0 | 9474 | 0.5614 | 0.6067 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-cola-copy3 | gchhablani | 2021-10-10T11:08:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy3
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6554
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6408 | 1.0 | 2138 | 0.7329 | 0.0 |
| 0.6589 | 2.0 | 4276 | 0.6311 | 0.0 |
| 0.6467 | 3.0 | 6414 | 0.6554 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ThomasSimonini/t5-end2end-question-generation | ThomasSimonini | 2021-10-10T08:30:38Z | 3,055 | 15 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-end2end-question-generation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
args: plain_text
---
# t5-end2end-question-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset to generate questions based on a context.
👉 If you want to learn how to fine-tune the t5 model to do the same, you can follow this [tutorial](https://colab.research.google.com/drive/1z-Zl2hftMrFXabYfmz8o9YZpgYx6sGeW?usp=sharing)
For instance:
```
Context: "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace."
```
```
Questions:
Who created Python?,
When was Python first released?
What is Python's design philosophy?
```
It achieves the following results on the evaluation set:
- Loss: 1.5691
## Use the Model
```
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Load the fine-tuned model and the matching tokenizer
hfmodel = T5ForConditionalGeneration.from_pretrained("ThomasSimonini/t5-end2end-question-generation")
tokenizer = T5TokenizerFast.from_pretrained("ThomasSimonini/t5-end2end-question-generation")
text= "The abolition of feudal privileges by the National Constituent Assembly on 4 August 1789 and the Declaration \\nof the Rights of Man and of the Citizen (La Déclaration des Droits de l'Homme et du Citoyen), drafted by Lafayette \\nwith the help of Thomas Jefferson and adopted on 26 August, paved the way to a Constitutional Monarchy \\n(4 September 1791 – 21 September 1792). Despite these dramatic changes, life at the court continued, while the situation \\nin Paris was becoming critical because of bread shortages in September. On 5 October 1789, a crowd from Paris descended upon Versailles \\nand forced the royal family to move to the Tuileries Palace in Paris, where they lived under a form of house arrest under \\nthe watch of Lafayette's Garde Nationale, while the Comte de Provence and his wife were allowed to reside in the \\nPetit Luxembourg, where they remained until they went into exile on 20 June 1791."
def run_model(input_string, **generator_args):
    # Default generation settings; keyword arguments passed by the caller take precedence
    generator_args.setdefault("max_length", 256)
    generator_args.setdefault("num_beams", 4)
    generator_args.setdefault("length_penalty", 1.5)
    generator_args.setdefault("no_repeat_ngram_size", 3)
    generator_args.setdefault("early_stopping", True)
    # The model was trained with a "generate questions:" task prefix
    input_string = "generate questions: " + input_string + " </s>"
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = hfmodel.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    # Each generated sequence contains several questions separated by <sep>
    output = [item.split("<sep>") for item in output]
    return output

run_model(text)
=> [['When did the National Constituent Assembly abolish feudal privileges?',
' Who drafted the Declaration of the Rights of Man and of the Citizen?',
' When was the Constitutional Monarchy established?',
' What was the name of the Declaration that paved the way to a constitutional monarchy?',
'']]
```
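The helper decodes each generated sequence and splits it on the `<sep>` token, since the model emits all of the questions for a passage as a single output sequence.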
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5834 | 0.34 | 100 | 1.9107 |
| 1.9642 | 0.68 | 200 | 1.7227 |
| 1.8526 | 1.02 | 300 | 1.6627 |
| 1.7383 | 1.36 | 400 | 1.6354 |
| 1.7223 | 1.69 | 500 | 1.6154 |
| 1.6871 | 2.03 | 600 | 1.6096 |
| 1.6309 | 2.37 | 700 | 1.6048 |
| 1.6242 | 2.71 | 800 | 1.5923 |
| 1.6226 | 3.05 | 900 | 1.5855 |
| 1.5645 | 3.39 | 1000 | 1.5874 |
| 1.5705 | 3.73 | 1100 | 1.5822 |
| 1.5543 | 4.07 | 1200 | 1.5817 |
| 1.5284 | 4.41 | 1300 | 1.5841 |
| 1.5275 | 4.75 | 1400 | 1.5741 |
| 1.5269 | 5.08 | 1500 | 1.5715 |
| 1.5079 | 5.42 | 1600 | 1.5701 |
| 1.4876 | 5.76 | 1700 | 1.5754 |
| 1.498 | 6.1 | 1800 | 1.5699 |
| 1.4852 | 6.44 | 1900 | 1.5693 |
| 1.4776 | 6.78 | 2000 | 1.5691 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-cola-copy2 | gchhablani | 2021-10-10T07:23:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy2
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6192 | 1.0 | 2138 | 0.6443 | 0.0 |
| 0.6177 | 2.0 | 4276 | 0.6296 | 0.0 |
| 0.6128 | 3.0 | 6414 | 0.6173 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en | MaryaAI | 2021-10-10T06:33:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:syssr_en_ar",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- syssr_en_ar
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: syssr_en_ar
type: syssr_en_ar
args: default
metrics:
- name: Bleu
type: bleu
value: 7.9946
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2046
- Bleu: 7.9946
- Gen Len: 20.0
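As a minimal sketch (not documented in the card), the checkpoint can presumably be used like any other Marian-based translation model through the `transformers` seq2seq classes. Note that the repository name says "ar-to-en" while the base model is Helsinki-NLP/opus-mt-en-ar, so the actual translation direction should be verified before use.

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translate a sentence; the direction (en->ar vs. ar->en) is not clearly documented
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```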
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 1 | 1.2038 | 7.9946 | 20.0 |
| No log | 2.0 | 2 | 1.2038 | 7.9946 | 20.0 |
| No log | 3.0 | 3 | 1.2038 | 7.9946 | 20.0 |
| No log | 4.0 | 4 | 1.2036 | 7.9946 | 20.0 |
| No log | 5.0 | 5 | 1.2046 | 7.9946 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/bert-large-cased-finetuned-rte | gchhablani | 2021-10-09T14:14:22Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-large-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6642599277978339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-rte
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5187
- Accuracy: 0.6643
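A minimal inference sketch, assuming the checkpoint keeps the standard two-way RTE head (entailment vs. not entailment); the label mapping is not documented in the card, so check `model.config.id2label` on the loaded model.

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "gchhablani/bert-large-cased-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# RTE is a sentence-pair task: encode premise and hypothesis together
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```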
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6969 | 1.0 | 623 | 0.7039 | 0.5343 |
| 0.5903 | 2.0 | 1246 | 0.6461 | 0.7184 |
| 0.4557 | 3.0 | 1869 | 1.5187 | 0.6643 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-qqp | gchhablani | 2021-10-09T08:56:52Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-large-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8943111550828593
- name: F1
type: f1
value: 0.8556565212985171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-qqp
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5515
- Accuracy: 0.8943
- F1: 0.8557
- Combined Score: 0.8750
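A minimal sketch for scoring a question pair with this checkpoint; it assumes the model exposes the usual sequence-classification interface and that label 1 means "duplicate" as in GLUE QQP, which should be verified against `model.config.id2label`.

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "gchhablani/fnet-large-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"

# QQP is a sentence-pair task: both questions are encoded in one input
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed label order: [not duplicate, duplicate]
```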
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.4574 | 1.0 | 90962 | 0.4946 | 0.8694 | 0.8297 | 0.8496 |
| 0.3387 | 2.0 | 181924 | 0.4745 | 0.8874 | 0.8437 | 0.8655 |
| 0.2029 | 3.0 | 272886 | 0.5515 | 0.8943 | 0.8557 | 0.8750 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|