modelId (string) | sha (string) | lastModified (string) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string, 17 classes) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
google/ncsnpp-ffhq-1024 | 0595d4b416a9f0df356d347ffd7e2d965d995d73 | 2022-07-21T15:03:54.000Z | [
"diffusers",
"arxiv:2011.13456",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ncsnpp-ffhq-1024 | 79 | 2 | diffusers | 5,100 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Score-Based Generative Modeling through Stochastic Differential Equations (SDE)
**Paper**: [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456)
**Authors**: Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
**Abstract**:
*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*
## Inference
*SDE* models can use **continuous** noise schedulers for inference, such as:
- [scheduling_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py)
See the following code:
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
model_id = "google/ncsnpp-ffhq-1024"
# load model and scheduler
sde_ve = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = sde_ve()["sample"]
# save image
image[0].save("sde_ve_generated_image.png")
```
Please take a look at [pipeline_score_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py)
for more details on how to write your own denoising loop.
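The loop below is a condensed sketch of what that pipeline does internally: a predictor-corrector iteration over decreasing noise levels. The attribute and method names (`unet`, `set_sigmas`, `step_correct`, `step_pred`, dict-style outputs) follow the early `diffusers` API this card targets and may differ in later releases, so treat this as pseudocode and verify against the linked file.
```python
# Condensed sketch of the SDE-VE predictor-corrector loop; names follow the
# early diffusers API (the UNet attribute may be called `model` instead of
# `unet` in the earliest releases).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("google/ncsnpp-ffhq-1024")
unet, scheduler = pipe.unet, pipe.scheduler

num_inference_steps = 2000
scheduler.set_timesteps(num_inference_steps)
scheduler.set_sigmas(num_inference_steps)

size = unet.config.sample_size
sample = torch.randn(1, 3, size, size) * scheduler.config.sigma_max

for i, t in enumerate(scheduler.timesteps):
    sigma_t = scheduler.sigmas[i] * torch.ones(1)
    # corrector: a few Langevin MCMC steps at the current noise level
    for _ in range(scheduler.config.correct_steps):
        score = unet(sample, sigma_t)["sample"]
        sample = scheduler.step_correct(score, sample)["prev_sample"]
    # predictor: one discretized reverse-SDE step
    score = unet(sample, sigma_t)["sample"]
    output = scheduler.step_pred(score, t, sample)
    sample, sample_mean = output["prev_sample"], output["prev_sample_mean"]

image = sample_mean.clamp(0, 1)  # final sample scaled to [0, 1]
```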
For more general information on how to use `diffusers` for inference, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb).
## Samples
1. <img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_0.png" alt="drawing" width="512"/>
2. <img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_1.png" alt="drawing" width="512"/>
3. <img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_2.png" alt="drawing" width="512"/>
4. <img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_3.png" alt="drawing" width="512"/> |
pinot/wav2vec2-large-xls-r-300m-j-roman-colab | 8bdac2ba3a7395b5e0a37a96588e5628abd6e9a1 | 2022-07-28T22:28:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pinot | null | pinot/wav2vec2-large-xls-r-300m-j-roman-colab | 79 | null | transformers | 5,101 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-j-roman-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-j-roman-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2233
- Wer: 0.1437
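As a quick sanity check, the checkpoint can be loaded with the `transformers` ASR pipeline. This is a minimal sketch; the audio path is a placeholder.
```python
# Minimal inference sketch (the audio file name is a placeholder).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="pinot/wav2vec2-large-xls-r-300m-j-roman-colab")
print(asr("sample.wav")["text"])
```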
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3479 | 1.22 | 400 | 1.6349 | 0.3868 |
| 0.8621 | 2.45 | 800 | 1.0185 | 0.2346 |
| 0.5421 | 3.67 | 1200 | 0.7549 | 0.1868 |
| 0.3932 | 4.89 | 1600 | 0.7893 | 0.1811 |
| 0.332 | 6.12 | 2000 | 0.9318 | 0.1919 |
| 0.2902 | 7.34 | 2400 | 0.8263 | 0.1839 |
| 0.2542 | 8.56 | 2800 | 0.8491 | 0.1829 |
| 0.2355 | 9.79 | 3200 | 0.8820 | 0.1805 |
| 0.2206 | 11.01 | 3600 | 0.9183 | 0.1748 |
| 0.2041 | 12.23 | 4000 | 0.9131 | 0.1725 |
| 0.1878 | 13.46 | 4400 | 0.9075 | 0.1699 |
| 0.1733 | 14.68 | 4800 | 0.8456 | 0.1665 |
| 0.1746 | 15.9 | 5200 | 0.9353 | 0.1745 |
| 0.1671 | 17.13 | 5600 | 0.9318 | 0.1713 |
| 0.1641 | 18.35 | 6000 | 0.8804 | 0.1661 |
| 0.1578 | 19.57 | 6400 | 0.9849 | 0.1795 |
| 0.1534 | 20.8 | 6800 | 1.0036 | 0.1637 |
| 0.1484 | 22.02 | 7200 | 0.9618 | 0.1722 |
| 0.1431 | 23.24 | 7600 | 0.9947 | 0.1680 |
| 0.139 | 24.46 | 8000 | 0.9923 | 0.1729 |
| 0.134 | 25.69 | 8400 | 1.0015 | 0.1641 |
| 0.1298 | 26.91 | 8800 | 0.9930 | 0.1704 |
| 0.1253 | 28.13 | 9200 | 0.9977 | 0.1605 |
| 0.1178 | 29.36 | 9600 | 0.9756 | 0.1653 |
| 0.1178 | 30.58 | 10000 | 1.1122 | 0.1784 |
| 0.1165 | 31.8 | 10400 | 0.9883 | 0.1655 |
| 0.1073 | 33.03 | 10800 | 1.1286 | 0.1677 |
| 0.1121 | 34.25 | 11200 | 1.0406 | 0.1660 |
| 0.1081 | 35.47 | 11600 | 1.0976 | 0.1678 |
| 0.109 | 36.7 | 12000 | 1.0915 | 0.1722 |
| 0.1027 | 37.92 | 12400 | 1.1167 | 0.1712 |
| 0.0925 | 39.14 | 12800 | 1.1598 | 0.1693 |
| 0.0913 | 40.37 | 13200 | 1.0712 | 0.1640 |
| 0.0895 | 41.59 | 13600 | 1.1692 | 0.1745 |
| 0.0908 | 42.81 | 14000 | 1.1248 | 0.1641 |
| 0.0905 | 44.04 | 14400 | 1.0523 | 0.1678 |
| 0.0864 | 45.26 | 14800 | 1.0261 | 0.1626 |
| 0.0843 | 46.48 | 15200 | 1.0746 | 0.1676 |
| 0.0759 | 47.71 | 15600 | 1.1035 | 0.1596 |
| 0.0758 | 48.93 | 16000 | 1.0977 | 0.1622 |
| 0.0743 | 50.15 | 16400 | 1.1203 | 0.1677 |
| 0.0826 | 51.38 | 16800 | 1.0983 | 0.1651 |
| 0.0743 | 52.6 | 17200 | 1.1452 | 0.1622 |
| 0.0713 | 53.82 | 17600 | 1.0882 | 0.1623 |
| 0.0651 | 55.05 | 18000 | 1.0588 | 0.1608 |
| 0.0669 | 56.27 | 18400 | 1.1332 | 0.1600 |
| 0.0626 | 57.49 | 18800 | 1.0747 | 0.1562 |
| 0.0646 | 58.72 | 19200 | 1.0585 | 0.1599 |
| 0.0639 | 59.94 | 19600 | 1.0106 | 0.1543 |
| 0.0603 | 61.16 | 20000 | 1.0875 | 0.1585 |
| 0.0551 | 62.39 | 20400 | 1.1273 | 0.1537 |
| 0.0553 | 63.61 | 20800 | 1.1376 | 0.1577 |
| 0.052 | 64.83 | 21200 | 1.1429 | 0.1553 |
| 0.0506 | 66.06 | 21600 | 1.0872 | 0.1577 |
| 0.0495 | 67.28 | 22000 | 1.0954 | 0.1488 |
| 0.0483 | 68.5 | 22400 | 1.1397 | 0.1524 |
| 0.0421 | 69.72 | 22800 | 1.2144 | 0.1581 |
| 0.0457 | 70.95 | 23200 | 1.1581 | 0.1532 |
| 0.0405 | 72.17 | 23600 | 1.2150 | 0.1566 |
| 0.0409 | 73.39 | 24000 | 1.1176 | 0.1508 |
| 0.0386 | 74.62 | 24400 | 1.2018 | 0.1526 |
| 0.0374 | 75.84 | 24800 | 1.2548 | 0.1494 |
| 0.0376 | 77.06 | 25200 | 1.2161 | 0.1486 |
| 0.033 | 78.29 | 25600 | 1.1607 | 0.1558 |
| 0.0339 | 79.51 | 26000 | 1.1557 | 0.1498 |
| 0.0355 | 80.73 | 26400 | 1.1234 | 0.1490 |
| 0.031 | 81.96 | 26800 | 1.1778 | 0.1473 |
| 0.0301 | 83.18 | 27200 | 1.1594 | 0.1441 |
| 0.0292 | 84.4 | 27600 | 1.2036 | 0.1482 |
| 0.0256 | 85.63 | 28000 | 1.2334 | 0.1463 |
| 0.0259 | 86.85 | 28400 | 1.2072 | 0.1469 |
| 0.0271 | 88.07 | 28800 | 1.1843 | 0.1456 |
| 0.0241 | 89.3 | 29200 | 1.1712 | 0.1445 |
| 0.0223 | 90.52 | 29600 | 1.2059 | 0.1433 |
| 0.0213 | 91.74 | 30000 | 1.2231 | 0.1452 |
| 0.0212 | 92.97 | 30400 | 1.1980 | 0.1438 |
| 0.0223 | 94.19 | 30800 | 1.2148 | 0.1459 |
| 0.0185 | 95.41 | 31200 | 1.2190 | 0.1437 |
| 0.0202 | 96.64 | 31600 | 1.2051 | 0.1437 |
| 0.0188 | 97.86 | 32000 | 1.2154 | 0.1438 |
| 0.0183 | 99.08 | 32400 | 1.2233 | 0.1437 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Giuliano/places | 81b8df40e05d96dfad040f54da85a2df0151dea9 | 2021-07-02T18:31:41.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Giuliano | null | Giuliano/places | 78 | null | transformers | 5,102 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: places
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# places
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
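For quick inference outside the demo, here is a minimal sketch with the `transformers` image-classification pipeline (the image path is a placeholder).
```python
# Minimal inference sketch (the image path is a placeholder).
from transformers import pipeline

classifier = pipeline("image-classification", model="Giuliano/places")
print(classifier("photo.jpg"))  # list of {label, score} dicts for beach / city / forest
```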
## Example Images
#### Beach

#### City

#### Forest
 |
Gunulhona/tbstmodel | 07b6891f8b2c6d855e27d2d660844f7aa8af0330 | 2021-12-29T01:23:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Gunulhona | null | Gunulhona/tbstmodel | 78 | null | transformers | 5,103 | Entry not found |
Helsinki-NLP/opus-mt-lg-en | b75b8c1f7d54cec6d83b364581dfef355c191327 | 2021-09-10T13:54:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lg",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lg-en | 78 | 1 | transformers | 5,104 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-en
* source languages: lg
* target languages: en
* OPUS readme: [lg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.eval.txt)
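A minimal usage sketch with the `transformers` MarianMT classes (the example sentence is illustrative):
```python
# Minimal translation sketch; the Luganda input sentence is illustrative.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lg-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Oli otya?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```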
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.en | 32.6 | 0.480 |
| Tatoeba.lg.en | 5.4 | 0.243 |
|
Helsinki-NLP/opus-mt-pl-ar | 0284a0a28fef5f4a87984e1887a6fd64cdb58604 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pl",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pl-ar | 78 | null | transformers | 5,105 | ---
language:
- pl
- ar
tags:
- translation
license: apache-2.0
---
### pol-ara
* source group: Polish
* target group: Arabic
* OPUS readme: [pol-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ara/README.md)
* model: transformer
* source language(s): pol
* target language(s): ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.eval.txt)
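A minimal usage sketch showing the sentence-initial target-language token (`>>ara<<` is assumed here to be a valid target ID per the `>>id<<` convention above; the Polish input sentence is illustrative):
```python
# Minimal translation sketch; the >>ara<< prefix selects the Arabic target
# variant (assumed valid per the >>id<< convention in this card).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>ara<< Dzień dobry!"], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```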
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.ara | 20.4 | 0.491 |
### System Info:
- hf_name: pol-ara
- source_languages: pol
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'ar']
- src_constituents: {'pol'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.test.txt
- src_alpha3: pol
- tgt_alpha3: ara
- short_pair: pl-ar
- chrF2_score: 0.491
- bleu: 20.4
- brevity_penalty: 0.9590000000000001
- ref_len: 1028.0
- src_name: Polish
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: pl
- tgt_alpha2: ar
- prefer_old: False
- long_pair: pol-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-zh-it | 436e288f84955869e282784cfb0af0552f4bed30 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-it | 78 | null | transformers | 5,106 | ---
language:
- zh
- it
tags:
- translation
license: apache-2.0
---
### zho-ita
* source group: Chinese
* target group: Italian
* OPUS readme: [zho-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ita/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn lzh lzh_Hang lzh_Hani lzh_Hira lzh_Yiii wuu_Bopo wuu_Hani wuu_Latn yue_Hani
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.ita | 27.9 | 0.508 |
### System Info:
- hf_name: zho-ita
- source_languages: zho
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'it']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: ita
- short_pair: zh-it
- chrF2_score: 0.508
- bleu: 27.9
- brevity_penalty: 0.935
- ref_len: 19684.0
- src_name: Chinese
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: it
- prefer_old: False
- long_pair: zho-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
NhatPham/vit-base-patch16-224-recylce-ft | e9c0a3e0c5bc0717971d554a13a02aec201f6866 | 2022-05-27T07:50:32.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | NhatPham | null | NhatPham/vit-base-patch16-224-recylce-ft | 78 | null | transformers | 5,107 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Labels
- 0: Object
- 1: Recycle
- 2: Non-Recycle
# vit-base-patch16-224
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1510
- Accuracy: 0.9443
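A hypothetical inference sketch mapping predictions back to the labels listed above (the image path is a placeholder, and the checkpoint's `id2label` is assumed to match that list):
```python
# Hypothetical inference sketch; "item.jpg" is a placeholder path.
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

model_id = "NhatPham/vit-base-patch16-224-recylce-ft"
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

image = Image.open("item.jpg")
inputs = feature_extractor(images=image, return_tensors="pt")
predicted = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[predicted])  # Object / Recycle / Non-Recycle
```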
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1438 | 1.0 | 150 | 0.1645 | 0.9353 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune | 06d544ab76d8a7fc6e79fce7fac4f75c25d72da4 | 2021-06-23T09:57:45.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune | 78 | null | transformers | 5,108 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code comment generation java
Pretrained model on the java programming language using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code comment generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/code%20comment%20generation/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SkolkovoInstitute/bart-base-detox | d0177c30f5da99f6e5056a498f71201b8bd41c07 | 2022-05-18T18:35:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"transformers",
"detoxification",
"autotrain_compatible"
] | text2text-generation | false | SkolkovoInstitute | null | SkolkovoInstitute/bart-base-detox | 78 | null | transformers | 5,109 | ---
language:
- en
tags:
- detoxification
licenses:
- cc-by-nc-sa
---
**Model Overview**
This is the model presented in the paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/).
The model itself is a [BART (base)](https://huggingface.co/facebook/bart-base) model trained on the parallel detoxification dataset ParaDetox, achieving SOTA results for the detoxification task. More details, code and data can be found [here](https://github.com/skoltech-nlp/paradetox).
**How to use**
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
base_model_name = 'facebook/bart-base'
model_name = 'SkolkovoInstitute/bart-base-detox'
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
```
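A hedged follow-up showing generation with the loaded model (the input sentence and decoding settings are illustrative, not the paper's exact configuration):
```python
# Illustrative detoxification call; the beam settings are an assumption, not
# the paper's configuration.
toxic = "This is a stupid example."
inputs = tokenizer(toxic, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```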
**Citation**
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png |
andi611/bert-large-uncased-whole-word-masking-ner-conll2003 | dc687912d610f00c92910c775ec1d31f17134901 | 2021-10-05T16:13:52.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | andi611 | null | andi611/bert-large-uncased-whole-word-masking-ner-conll2003 | 78 | null | transformers | 5,110 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-large-uncased-whole-word-masking-ner-conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9886888970085945
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-ner-conll2003
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9527
- Recall: 0.9569
- F1: 0.9548
- Accuracy: 0.9887
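A hypothetical usage sketch with the `transformers` NER pipeline (the example sentence is illustrative):
```python
# Minimal NER sketch; the input sentence is illustrative.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="andi611/bert-large-uncased-whole-word-masking-ner-conll2003",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```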
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4071 | 1.0 | 877 | 0.0584 | 0.9306 | 0.9418 | 0.9362 | 0.9851 |
| 0.0482 | 2.0 | 1754 | 0.0594 | 0.9362 | 0.9491 | 0.9426 | 0.9863 |
| 0.0217 | 3.0 | 2631 | 0.0550 | 0.9479 | 0.9584 | 0.9531 | 0.9885 |
| 0.0103 | 4.0 | 3508 | 0.0592 | 0.9527 | 0.9569 | 0.9548 | 0.9887 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
black/simple_kitchen | 2b141f3569aa0bb39337c4691d5ca9cd8d2302db | 2021-08-19T14:26:04.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | black | null | black/simple_kitchen | 78 | null | transformers | 5,111 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: simple_kitchen
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7222222089767456
---
# simple_kitchen
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### best kitchen island

#### kitchen cabinet

#### kitchen countertop
 |
facebook/convnext-xlarge-224-22k | fc6b9974e8ce68f2880a5ff3589a90556e34e332 | 2022-02-26T12:20:17.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-xlarge-224-22k | 78 | null | transformers | 5,112 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (xlarge-sized model)
ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 22k ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-xlarge-224-22k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-xlarge-224-22k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 22k ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
firebolt/llama_or_what2 | eeaf4ce35822e82b6e55ff6d96730444fc41373f | 2021-07-31T19:52:32.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | firebolt | null | firebolt/llama_or_what2 | 78 | null | transformers | 5,113 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: llama_or_what2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.4166666567325592
---
# llama_or_what2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### alpaca

#### guanaco

#### llama

#### vicuna
 |
mmoradi/Robust-Biomed-RoBERTa-SemanticSimilarity | 215a7ecbbfe7d585fb1c4b4c692e88b02656f2a2 | 2021-10-07T10:35:43.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | mmoradi | null | mmoradi/Robust-Biomed-RoBERTa-SemanticSimilarity | 78 | 1 | transformers | 5,114 | Entry not found |
nateraw/rare-puppers | 799e0a1833084ef4db11afadce85b0dedc509a84 | 2021-07-01T18:21:41.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/rare-puppers | 78 | 1 | transformers | 5,115 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9583333134651184
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Base | c6e1ca8b53c20e96e637e276ab9e3d6d754ebfd3 | 2021-06-20T19:01:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nreimers | null | nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Base | 78 | null | transformers | 5,116 | # MiniLMv2
This is a MiniLMv2 model from: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm/tree/master/minilm) |
sentence-transformers/gtr-t5-xxl | 1431e7c1a9f4bf061070434127e055edce49313b | 2022-02-09T11:14:39.000Z | [
"pytorch",
"t5",
"en",
"arxiv:2112.07899",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/gtr-t5-xxl | 78 | null | sentence-transformers | 5,117 | ---
pipeline_tag: sentence-similarity
language: en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/gtr-t5-xxl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space. The model was specifically trained for the task of semantic search.
This model was converted from the TensorFlow model [gtr-xxl-1](https://tfhub.dev/google/gtr/gtr-xxl/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-11B model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/gtr-t5-xxl')
embeddings = model.encode(sentences)
print(embeddings)
```
The model requires sentence-transformers version 2.2.0 or newer.
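As a hedged follow-up, the embeddings computed above can be compared for semantic search with the library's cosine-similarity helper:
```python
# Cosine similarity between the two example embeddings computed above.
from sentence_transformers import util

print(util.cos_sim(embeddings[0], embeddings[1]))
```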
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-xxl)
## Citing & Authors
If you find this model helpful, please cite the respective publication:
[Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899)
|
lazyturtl/digital | cd165e98717597209f812cc588edd09fe6f79fcc | 2022-03-24T04:28:50.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | lazyturtl | null | lazyturtl/digital | 78 | null | transformers | 5,118 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: digital
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8974359035491943
---
# digital
## Example Images
#### ansys

#### blender

#### roblox

#### sketchup
 |
thapasushil/vit-base-cifar10 | 72f36dc15652adfefe90a6bb97d2100c99ff40cf | 2022-04-12T17:16:29.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | thapasushil | null | thapasushil/vit-base-cifar10 | 78 | null | transformers | 5,119 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
model-index:
- name: vit-base-cifar10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cifar10
This model is a fine-tuned version of [nateraw/vit-base-patch16-224-cifar10](https://huggingface.co/nateraw/vit-base-patch16-224-cifar10) on the cifar10-upside-down dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2348
- eval_accuracy: 0.9134
- eval_runtime: 157.4172
- eval_samples_per_second: 127.051
- eval_steps_per_second: 1.988
- epoch: 0.02
- step: 26
## Model description
Vision Transformer
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
daveni/upside_down_classifier | 2c08ec90176988347efe236bdacff435171361ab | 2022-04-10T12:08:07.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:cifar100",
"transformers"
] | image-classification | false | daveni | null | daveni/upside_down_classifier | 78 | null | transformers | 5,120 | ---
datasets:
- cifar100
widget:
- src: https://huggingface.co/daveni/upside_down_classifier/resolve/main/meme_upside_down.jpg
example_title: Upside down example
- src: https://huggingface.co/daveni/upside_down_classifier/resolve/main/meme.jpg
example_title: Original example
---
# Upside Down Classifier |
jemole/swin-tiny-patch4-window7-224-finetuned-eurosat | a1f41811e33e1ab6efd30a6a89e049fe151cb48c | 2022-04-24T13:53:19.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | jemole | null | jemole/swin-tiny-patch4-window7-224-finetuned-eurosat | 78 | null | transformers | 5,121 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.975925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0800
- Accuracy: 0.9759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2442 | 1.0 | 190 | 0.1605 | 0.9481 |
| 0.1529 | 2.0 | 380 | 0.0800 | 0.9759 |
| 0.151 | 3.0 | 570 | 0.0681 | 0.9759 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
driboune/skin_type | 08f21724116d7f2f661e03166df00176da24ffb9 | 2022-05-02T08:08:40.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | driboune | null | driboune/skin_type | 78 | null | transformers | 5,122 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: skin_type
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8222222328186035
---
# skin_type
When aiming for fairness in image classification of humans, knowing the skin type of subjects helps verify that a model performs correctly across all skin types.
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dark skin

#### light skin
 |
ipvikas/rare-puppers | 593bb29831fef62d847eb7f487c39eead3df6b08 | 2022-05-15T12:47:13.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | ipvikas | null | ipvikas/rare-puppers | 78 | null | transformers | 5,123 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9552238583564758
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
Ahmed9275/ALL-94.5 | 9da59ee89aef0f1937aeeb061b7d31e93892a3cf | 2022-05-06T01:39:35.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Ahmed9275 | null | Ahmed9275/ALL-94.5 | 78 | null | transformers | 5,124 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL-94.5
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9452415704727173
---
# ALL-94.5
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
AykeeSalazar/vc-bantai-vit-withoutAMBI | 53431cc0db9ae3b3639b691837ab8b79ec5131ac | 2022-05-22T06:14:09.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/vc-bantai-vit-withoutAMBI | 78 | null | transformers | 5,125 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: vc-bantai-vit-withoutAMBI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vc-bantai-vit-withoutAMBI
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2578
- eval_accuracy: 0.9533
- eval_runtime: 31.6691
- eval_samples_per_second: 130.443
- eval_steps_per_second: 2.052
- epoch: 270.26
- step: 10000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Vemi/orchid219_ft_vit-large-patch16-224-in21k-finetuned-eurosat | 88895aa12954d1b7de1581a592bd0404aaef56dc | 2022-05-23T02:56:12.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"model-index"
] | image-classification | false | Vemi | null | Vemi/orchid219_ft_vit-large-patch16-224-in21k-finetuned-eurosat | 78 | null | transformers | 5,126 | ---
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: orchid219_ft_vit-large-patch16-224-in21k-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9230769230769231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# orchid219_ft_vit-large-patch16-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [gary109/orchid219_ft_vit-large-patch16-224-in21k](https://huggingface.co/gary109/orchid219_ft_vit-large-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9545
- Accuracy: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.5728 | 0.96 | 17 | 2.1936 | 0.8718 |
| 1.6005 | 1.96 | 34 | 1.2044 | 0.9359 |
| 0.9764 | 2.96 | 51 | 0.9545 | 0.9231 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
promobot/labse-ru | 77e6b16ebb486181db603cac424ce3d3048becf1 | 2022-06-07T06:42:01.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"ru",
"transformers",
"sentence-similarity",
"license:apache-2.0"
] | feature-extraction | false | promobot | null | promobot/labse-ru | 78 | 0 | transformers | 5,127 | ---
language: ["ru"]
pipeline_tag: feature-extraction
tags:
- feature-extraction
- sentence-similarity
license: apache-2.0
--- |
nvidia/stt_de_conformer_ctc_large | 38c37994f4b424d5307f8ad4cc8955154c0914d6 | 2022-07-01T01:46:17.000Z | [
"nemo",
"de",
"dataset:VoxPopuli (DE)",
"dataset:Multilingual LibriSpeech",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2005.08100",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"Riva",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | nvidia | null | nvidia/stt_de_conformer_ctc_large | 78 | 3 | nemo | 5,128 | ---
language:
- de
library_name: nemo
datasets:
- VoxPopuli (DE)
- Multilingual LibriSpeech
- mozilla-foundation/common_voice_7_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
- Riva
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: stt_de_conformer_ctc_large
results:
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
config: de
split: test
args:
language: de
metrics:
- name: Test WER
type: wer
value: 6.68
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
config: de
split: test
args:
language: de
metrics:
- name: Test WER
type: wer
value: 4.63
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: VoxPopuli
type: VoxPopuli
metrics:
- name: Test WER
type: wer
value: 10.51
---
# NVIDIA Conformer-CTC Large (de)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
This model transcribes speech in lowercase German alphabet including spaces, and is trained on several thousand hours of German speech data.
It is a non-autoregressive "large" variant of Conformer, with around 120 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_de_conformer_ctc_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_de_conformer_ctc_large" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-CTC model is a non-autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses CTC loss/decoding instead of Transducer. You may find more info on the detail of this model here: [Conformer-CTC Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of German speech:
- VoxPopuli (DE)
- Multilingual Librispeech (MLS DE) - 1500 hours subset
- Mozilla Common Voice (v7.0)
Note: older versions of the model may have trained on a smaller set of datasets.
## Performance
The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | MCV7.0 dev | MCV7.0 test | MLS dev | MLS test | Voxpopuli dev | Voxpopuli test |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|------------|----------------|
| 1.5.0 | SentencePiece Unigram | 128 | 5.84 | 6.68 | 3.85 | 4.63 | 12.56 | 10.51 |
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out the [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
- [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) |
Akari/albert-base-v2-finetuned-squad | cc24dc48164a747296f80f831cc9353e2470705e | 2021-12-02T05:36:13.000Z | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Akari | null | Akari/albert-base-v2-finetuned-squad | 77 | 1 | transformers | 5,129 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9492
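The generated card does not include a usage example; here is a minimal sketch (not part of the original card; the question and context are placeholders):
```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint; inputs are illustrative only.
qa = pipeline("question-answering", model="Akari/albert-base-v2-finetuned-squad")
result = qa(
    question="What does SQuAD v2 add?",
    context="SQuAD v2 combines the SQuAD 1.1 questions with unanswerable questions.",
)
print(result["answer"], result["score"])
```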
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
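For reference, the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows (a sketch, not the exact command that was used; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="albert-base-v2-finetuned-squad",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```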
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 8248 | 0.8813 |
| 0.6333 | 2.0 | 16496 | 0.8042 |
| 0.4372 | 3.0 | 24744 | 0.9492 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
DaNLP/da-bert-hatespeech-classification | 19fdf618a20faf5f8a16cb7c14b02f72d33df94b | 2021-11-15T14:48:18.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"da",
"dataset:social media",
"transformers",
"hatespeech",
"license:cc-by-sa-4.0"
] | text-classification | false | DaNLP | null | DaNLP/da-bert-hatespeech-classification | 77 | null | transformers | 5,130 | ---
language:
- da
tags:
- bert
- pytorch
- hatespeech
license: cc-by-sa-4.0
datasets:
- social media
metrics:
- f1
widget:
- text: "Senile gamle idiot"
---
# Danish BERT for hate speech classification
The BERT HateSpeech model classifies offensive Danish text into 4 categories:
* `Særlig opmærksomhed` (special attention, e.g. threat)
* `Personangreb` (personal attack)
* `Sprogbrug` (offensive language)
* `Spam & indhold` (spam)
This model is intended to be used after the [BERT HateSpeech detection model](https://huggingface.co/DaNLP/da-bert-hatespeech-detection).
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to load the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-hatespeech-classification")
tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-hatespeech-classification")
```
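And a minimal inference sketch (not from the original card; it assumes the checkpoint's `id2label` config maps class ids to the four category names above):
```python
import torch

inputs = tokenizer("Senile gamle idiot", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # one of the four categories
```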
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
Helsinki-NLP/opus-mt-ru-sl | 5e8617a9d176263e6617a703ee5c6748ce9a701f | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"sl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-sl | 77 | null | transformers | 5,131 | ---
language:
- ru
- sl
tags:
- translation
license: apache-2.0
---
### rus-slv
* source group: Russian
* target group: Slovenian
* OPUS readme: [rus-slv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): slv
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.eval.txt)
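A minimal usage sketch (not part of the original card; the input sentence is a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-ru-sl"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Translate a Russian sentence to Slovenian.
batch = tokenizer(["Добро пожаловать!"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```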
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.slv | 32.3 | 0.492 |
### System Info:
- hf_name: rus-slv
- source_languages: rus
- target_languages: slv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'sl']
- src_constituents: {'rus'}
- tgt_constituents: {'slv'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: slv
- short_pair: ru-sl
- chrF2_score: 0.492
- bleu: 32.3
- brevity_penalty: 0.992
- ref_len: 2135.0
- src_name: Russian
- tgt_name: Slovenian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: sl
- prefer_old: False
- long_pair: rus-slv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-rw-en | 61ede64e3ceefce455d41fc85749314baa49a0ee | 2021-09-10T14:02:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"rw",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-rw-en | 77 | null | transformers | 5,132 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-rw-en
* source languages: rw
* target languages: en
* OPUS readme: [rw-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rw-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rw-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rw.en | 37.3 | 0.530 |
| Tatoeba.rw.en | 49.8 | 0.643 |
|
Neto71/sea_mammals | fcc3edae3c90e7e2421a7fa5f692c41428dfb475 | 2021-07-05T13:14:43.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Neto71 | null | Neto71/sea_mammals | 77 | null | transformers | 5,133 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: sea_mammals
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8472222089767456
---
# sea_mammals
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
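For inference, a minimal sketch (not part of the generated card; `dolphin.jpg` is a placeholder path, and it assumes the exported checkpoint ships with its feature extractor config, as HuggingPics exports do):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Neto71/sea_mammals")
print(classifier("dolphin.jpg"))  # ranked labels with scores
```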
## Example Images
#### blue whale

#### dolphin

#### orca whale
 |
Sena/flowers | 7a080c96ceca25dcff4cf40f5677242d7f4074fa | 2021-07-04T14:10:45.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Sena | null | Sena/flowers | 77 | null | transformers | 5,134 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: flowers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6041666865348816
---
# flowers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### karanfil

#### leylak

#### menekse

#### nergis

#### zambak
 |
TransQuest/monotransquest-hter-en_any | bf27886d123a86dc6de009c08860b0af880ffb89 | 2021-10-24T18:41:16.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en-multilingual",
"transformers",
"Quality Estimation",
"monotransquest",
"HTER",
"license:apache-2.0"
] | text-classification | false | TransQuest | null | TransQuest/monotransquest-hter-en_any | 77 | null | transformers | 5,135 | ---
language: en-multilingual
tags:
- Quality Estimation
- monotransquest
- HTER
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, since QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest).
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_any", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
``` |
airKlizz/mt5-base-wikinewssum-french | 55180f3c443ac6d1c489a3a73757b3086c174093 | 2021-12-24T14:42:37.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-french | 77 | null | transformers | 5,136 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-french
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-french
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0917
- Rouge1: 12.0984
- Rouge2: 5.7289
- Rougel: 9.9245
- Rougelsum: 11.0697
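The generated card lacks a usage example; a minimal sketch (not part of the original card; the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="airKlizz/mt5-base-wikinewssum-french")
article = "Votre article de presse ici ..."  # placeholder French news text
print(summarizer(article, max_length=128)[0]["summary_text"])
```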
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|
| No log | 1.0 | 549 | 2.3203 | 11.5172 | 4.9352 | 9.3617 | 10.4605 |
| No log | 2.0 | 1098 | 2.2057 | 11.8469 | 5.2369 | 9.6452 | 10.8337 |
| No log | 3.0 | 1647 | 2.1525 | 11.9096 | 5.4027 | 9.7648 | 10.9315 |
| 3.1825 | 4.0 | 2196 | 2.1307 | 12.0782 | 5.5848 | 9.9614 | 11.1081 |
| 3.1825 | 5.0 | 2745 | 2.1172 | 11.9821 | 5.6042 | 9.8216 | 11.0077 |
| 3.1825 | 6.0 | 3294 | 2.1012 | 12.0845 | 5.6834 | 9.9119 | 11.0741 |
| 3.1825 | 7.0 | 3843 | 2.0964 | 12.1296 | 5.7271 | 9.9495 | 11.1227 |
| 2.3376 | 8.0 | 4392 | 2.0917 | 12.0984 | 5.7289 | 9.9245 | 11.0697 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
castorini/mdpr-passage-nq | d0edbcc0c8a55011086e4c2ab73a99c412d0a819 | 2021-08-20T15:21:30.000Z | [
"pytorch",
"dpr",
"transformers"
] | null | false | castorini | null | castorini/mdpr-passage-nq | 77 | 1 | transformers | 5,137 | Entry not found |
echarlaix/bart-base-cnn-r2-18.7-d23-hybrid | 89e3e2f99f0ceafaa00ee86e5cff969010b8c5cf | 2021-08-20T09:58:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | echarlaix | null | echarlaix/bart-base-cnn-r2-18.7-d23-hybrid | 77 | null | transformers | 5,138 | ---
language: en
license: apache-2.0
tags:
- summarization
datasets:
- cnn_dailymail
metrics:
- R1
- R2
- RL
---
## facebook/bart-base model fine-tuned on CNN/DailyMail
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain **23%** of the original weights.
The model contains **45%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
*(An interactive plot of the remaining weight density is available in the model repository under `model_card/density_info.js`.)*
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/facebook/bart-base).
A side-effect of block pruning is that some of the attention heads are completely removed: 61 heads were removed out of a total of 216 (28.2%).
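A rough sketch (not from the original card) to sanity-check the sparsity claim, assuming the published checkpoint keeps pruned weights as explicit zeros, as nn_pruning does before final graph compaction; if the layers were physically shrunk instead, this will report close to 100%:
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("echarlaix/bart-base-cnn-r2-18.7-d23-hybrid")
total = nonzero = 0
for name, p in model.named_parameters():
    # rough filter for linear-layer weights, skipping embedding matrices
    if p.dim() == 2 and not any(k in name for k in ("embed", "shared")):
        total += p.numel()
        nonzero += (p != 0).sum().item()
print(f"non-zero linear weights: {nonzero / total:.1%}")
```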
## Details of the CNN/DailyMail dataset
| Dataset | Split | # samples |
| ------------- | ----- | --------- |
| CNN/DailyMail | train | 287K |
| CNN/DailyMail | eval | 13K |
### Results
| Metric | # Value |
| ----------- | --------- |
| **Rouge 1** | **41.43** |
| **Rouge 2** | **18.72** |
| **Rouge L** | **38.35** |
|
facebook/wav2vec2-base-10k-voxpopuli-ft-en | 328f7961ee96d2db3af8bbd22c685f5dd96f9692 | 2021-07-06T01:49:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"arxiv:2101.00390",
"transformers",
"audio",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-10k-voxpopuli-ft-en | 77 | null | transformers | 5,139 | ---
language: en
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K-hour unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in en (refer to Table 1 of the paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official [VoxPopuli repository](https://github.com/facebookresearch/voxpopuli/) for more information.
# Usage for inference
The following shows how the model can be used for inference on samples of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets).
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-en")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-en")
# load dataset
ds = load_dataset("common_voice", "en", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```
|
indobenchmark/indobert-lite-large-p2 | 14818646c71e898fe71351d12a1dc9fe1bf39ba5 | 2020-12-11T21:45:59.000Z | [
"pytorch",
"tf",
"albert",
"feature-extraction",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] | feature-extraction | false | indobenchmark | null | indobenchmark/indobert-lite-large-p2 | 77 | null | transformers | 5,140 | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT-Lite Large Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p2")
```
### Extract contextual representation
```python
import torch

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
lincoln/barthez-squadFR-fquad-piaf-question-generation | 0e0ece03c9519ae7ae097714d10e8a203b34b34d | 2021-10-11T15:24:58.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"fr",
"dataset:squadFR",
"dataset:fquad",
"dataset:piaf",
"arxiv:2010.12321",
"transformers",
"seq2seq",
"barthez",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | lincoln | null | lincoln/barthez-squadFR-fquad-piaf-question-generation | 77 | 2 | transformers | 5,141 | ---
language:
- fr
license: mit
pipeline_tag: "text2text-generation"
datasets:
- squadFR
- fquad
- piaf
metrics:
- bleu
- rouge
widget:
- text: "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus, des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\
Elle est souvent associée aux <hl>données massives et à l'analyse des données<hl>."
tags:
- seq2seq
- barthez
---
# Question generation from a context
The model is fine-tuned from [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) to generate questions from a paragraph and a span of tokens. The token span marks the answer on which the question is based.
Input: _Les projecteurs peuvent être utilisées pour \<hl\>illuminer\<hl\> des terrains de jeu extérieurs_
Output: _À quoi servent les projecteurs sur les terrains de jeu extérieurs?_
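A minimal generation sketch (not part of the original card; decoding parameters are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "lincoln/barthez-squadFR-fquad-piaf-question-generation"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Highlight the answer span with the <hl> token, as in the example above.
text = "Les projecteurs peuvent être utilisées pour <hl>illuminer<hl> des terrains de jeu extérieurs"
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(**inputs, max_length=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```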
## Training data
The training set is the concatenation of the SquadFR, [fquad](https://huggingface.co/datasets/fquad) and [piaf](https://huggingface.co/datasets/piaf) datasets. The input is the context, with the answer wrapped in the special token **\<hl\>**.
Volume (number of context/answer/question triplets):
* train: 98,211
* test: 12,277
* valid: 12,776
## Training
Training was carried out on a single Tesla V100 GPU.
* Batch size: 20
* Weight decay: 0.01
* Learning rate: 3e-5 (linearly decayed)
* Less than 24h of training
* Default parameters of the [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) class
* Total steps: 56,000
*(Figure: training loss curve over the 56,000 steps.)*
The loss curve shows "jumps" because training was resumed twice; each resume changed the learning rate, which explains the shape of the curve.
## Results
Generated questions are evaluated with the BLEU and ROUGE metrics; these are approximate metrics for text generation.
*(Figure: BLEU and ROUGE evaluation scores.)*
## Tokenizer
The base tokenizer is [BarthezTokenizer](https://huggingface.co/transformers/model_doc/barthez.html), to which the special tokens \<sep\> and \<hl\> were added.
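As a quick sanity check, you can verify that the added tokens are known to the tokenizer as single units rather than being split into subwords (a minimal sketch; it assumes the tokens were registered as special tokens):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('lincoln/barthez-squadFR-fquad-piaf-question-generation')

# Each added special token should map to a single id instead of several subword ids
for token in ['<sep>', '<hl>']:
    ids = tokenizer.encode(token, add_special_tokens=False)
    print(token, '->', ids)
```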
## Usage
_This model is a proof of concept; we do not guarantee its performance._
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
nlp = Text2TextGenerationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
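# The pair of <hl> tokens marks the answer span the generated question should target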
nlp("Les projecteurs peuvent être utilisées pour <hl>illuminer<hl> des terrains de jeu extérieurs")
# >>> [{'generated_text': 'À quoi servent les projecteurs sur les terrains de jeu extérieurs?'}]
```
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Text2TextGenerationPipeline
model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation'
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "Les Etats signataires de la convention sur la diversité biologique des Nations unies doivent parvenir, lors de la COP15, qui s’ouvre <hl>lundi<hl>, à un nouvel accord mondial pour enrayer la destruction du vivant au cours de la prochaine décennie."
inputs = loaded_tokenizer(text, return_tensors='pt')
out = loaded_model.generate(
input_ids=inputs.input_ids,
attention_mask=inputs.attention_mask,
num_beams=16,
num_return_sequences=16,
length_penalty=10
)
questions = []
for question in out:
questions.append(loaded_tokenizer.decode(question, skip_special_tokens=True))
for q in questions:
print(q)
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand a lieu la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations unies?
# Quand se tient la conférence de la diversité biologique des Nations unies?
# Quand a lieu la conférence sur la diversité biologique des Nations unies?
# Quand a lieu la conférence de la diversité biologique des Nations unies?
# Quand se tient la conférence des Nations unies sur la diversité biologique?
# Quand a lieu la conférence des Nations unies sur la diversité biologique?
# Quand se tient la conférence sur la diversité biologique des Nations Unies?
# Quand se tient la conférence des Nations Unies sur la diversité biologique?
# Quand se tient la conférence de la diversité biologique des Nations Unies?
# Quand la COP15 a-t-elle lieu?
# Quand la COP15 a-t-elle lieu?
# Quand se tient la conférence sur la diversité biologique?
# Quand s'ouvre la COP15,?
# Quand s'ouvre la COP15?
```
## Citation
Model based on:
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
macedonizer/hr-roberta-base | 16cb3b1be4f0e27db649b0e9fb789ec4a29493aa | 2021-09-22T08:58:43.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"hr",
"dataset:wiki-hr",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | macedonizer | null | macedonizer/hr-roberta-base | 77 | 1 | transformers | 5,142 | ---
language:
- hr
thumbnail: https://huggingface.co/macedonizer/hr-roberta-base/ivo-andric.jpg
tags:
- masked-lm
license: apache-2.0
datasets:
- wiki-hr
---
# HR-RoBERTa base model
Pretrained model on the Croatian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between zagreb and Zagreb.
# Model description
RoBERTa is a transformers model pre-trained on a large corpus of Croatian data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the Croatian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs.
# Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT2.
# How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/hr-roberta-base')
unmasker("Zagreb je <mask> grad Hrvatske.")

# [{'sequence': 'Zagreb je glavni grad Hrvatske.', 'score': 0.8750431537628174, 'token': 2026, 'token_str': ' glavni'},
#  {'sequence': 'Zagreb je najveći grad Hrvatske.', 'score': 0.060711536556482315, 'token': 2474, 'token_str': ' najveći'},
#  {'sequence': 'Zagreb je prvi grad Hrvatske.', 'score': 0.005241130944341421, 'token': 780, 'token_str': ' prvi'},
#  {'sequence': 'Zagreb je jedini grad Hrvatske.', 'score': 0.004663003608584404, 'token': 3280, 'token_str': ' jedini'},
#  {'sequence': 'Zagreb je treći grad Hrvatske.', 'score': 0.003771631745621562, 'token': 3236, 'token_str': ' treći'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/hr-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/hr-roberta-base')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
|
nateraw/baked-goods | 7a9e8cc833a401404a6f752336275c94423f7479 | 2021-06-30T07:11:09.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/baked-goods | 77 | null | transformers | 5,143 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: baked-goods
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.875
---
# baked-goods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cake

#### cookie

#### pie
 |
nateraw/ex-for-evan | 7b15f15ff925539baa48729cedd6dd1d78dd138f | 2021-09-18T22:20:05.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/ex-for-evan | 77 | null | transformers | 5,144 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ex-for-evan
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9791666865348816
---
# ex-for-evan
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cheetah

#### elephant

#### giraffe

#### rhino
 |
nateraw/vit-base-beans-demo-v2 | 224661166c75ca3a6f14c79ade1848723a9f9ede | 2021-08-27T17:33:08.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"other-image-classification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nateraw | null | nateraw/vit-base-beans-demo-v2 | 77 | null | transformers | 5,145 | ---
license: apache-2.0
tags:
- image-classification
- other-image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0099
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0705 | 1.54 | 100 | 0.0562 | 0.9925 |
| 0.0123 | 3.08 | 200 | 0.0124 | 1.0 |
| 0.008 | 4.62 | 300 | 0.0099 | 1.0 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ozcangundes/wav2vec2-large-xlsr-53-turkish | 2993cf6895d0369dbddb96a28f0bafda2cc60343 | 2021-04-02T14:54:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ozcangundes | null | ozcangundes/wav2vec2-large-xlsr-53-turkish | 77 | 1 | transformers | 5,146 | ---
language:
- tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Ozcan Gundes XLSR Wav2Vec2 Large Turkish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 29.62
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\’\\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.62 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1hesw9z_kFFINT93jBvGuFspOLrHx10AE?usp=sharing) |
peterbonnesoeur/Visual_transformer_chihuahua_cookies | 0d64f408539e5983424b945386bbd2f10c154bea | 2021-07-02T11:29:45.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | peterbonnesoeur | null | peterbonnesoeur/Visual_transformer_chihuahua_cookies | 77 | null | transformers | 5,147 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Visual_transformer_chihuahua_cookies
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# Visual_transformer_chihuahua_cookies
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### chihuahua

#### cookies

#### corgi

#### samoyed

#### shiba inu
 |
sberbank-ai/ruclip-vit-base-patch16-224 | bfba4ada5bd6bc10d1b4e42deef0770f5c38388e | 2022-01-09T21:34:11.000Z | [
"pytorch",
"transformers"
] | null | false | sberbank-ai | null | sberbank-ai/ruclip-vit-base-patch16-224 | 77 | null | transformers | 5,148 | # ruclip-vit-base-patch16-224
**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model
for computing image-text similarities and for ranking captions and images.
RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and
multimodal learning.
Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Task: `text ranking`; `image ranking`; `zero-shot image classification`;
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `224`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `16`
## Usage [Github](https://github.com/sberbank-ai/ru-clip)
```
pip install ruclip
```
```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch16-224", device="cuda")
```
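A zero-shot classification sketch, assuming the `Predictor` helper shown in the linked repository; the candidate labels and image path are illustrative:
```python
import torch
from PIL import Image
import ruclip

device = "cuda"
clip, processor = ruclip.load("ruclip-vit-base-patch16-224", device=device)

classes = ["кот", "собака"]          # candidate labels (Russian)
images = [Image.open("photo.jpg")]   # hypothetical input image

predictor = ruclip.Predictor(clip, processor, device, bs=8)
with torch.no_grad():
    text_latents = predictor.get_text_latents(classes)
    pred_labels = predictor.run(images, text_latents)
print([classes[i] for i in pred_labels])
```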
## Performance
We have evaluated the performance on the following datasets:
| Dataset | Metric Name | Metric Result |
|:--------------|:---------------|:--------------------|
| Food101 | acc | 0.552 |
| CIFAR10 | acc | 0.810 |
| CIFAR100 | acc | 0.496 |
| Birdsnap | acc | 0.117 |
| SUN397 | acc | 0.462 |
| Stanford Cars | acc | 0.487 |
| DTD | acc | 0.401 |
| MNIST | acc | 0.464 |
| STL10 | acc | 0.932 |
| PCam | acc | 0.505 |
| CLEVR | acc | 0.128 |
| Rendered SST2 | acc | 0.527 |
| ImageNet | acc | 0.401 |
| FGVC Aircraft | mean-per-class | 0.043 |
| Oxford Pets | mean-per-class | 0.595 |
| Caltech101 | mean-per-class | 0.775 |
| Flowers102 | mean-per-class | 0.388 |
| HatefulMemes | roc-auc | 0.516 |
# Authors
+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
|
simonlevine/clinical-longformer | ff576e64e4a788b329bd75987e18240ed5752f22 | 2021-05-20T21:25:09.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonlevine | null | simonlevine/clinical-longformer | 77 | null | transformers | 5,149 | - You'll need to instantiate a special RoBERTa class. Though technically a "Longformer", the elongated RoBERTa model will still need to be pulled in as such.
- To do so, use the following classes:
```python
# Imports (assumption: paths for recent transformers releases; older versions
# exposed LongformerSelfAttention from transformers.modeling_longformer instead).
from transformers import RobertaForMaskedLM
from transformers.models.longformer.modeling_longformer import LongformerSelfAttention

class RobertaLongSelfAttention(LongformerSelfAttention):
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=False,
):
return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions)
class RobertaLongForMaskedLM(RobertaForMaskedLM):
def __init__(self, config):
super().__init__(config)
for i, layer in enumerate(self.roberta.encoder.layer):
# replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention`
layer.attention.self = RobertaLongSelfAttention(config, layer_id=i)
```
- Then, pull the model as ```RobertaLongForMaskedLM.from_pretrained('simonlevine/bioclinical-roberta-long')```
- Now, it can be used as usual. Note you may get untrained weights warnings.
- Note that you can replace ```RobertaForMaskedLM``` with a different task-specific RoBERTa from Huggingface, such as RobertaForSequenceClassification.
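- For instance, a sequence classification variant can follow the same pattern (a sketch under that assumption, not code shipped with this repo):
```python
from transformers import RobertaForSequenceClassification

class RobertaLongForSequenceClassification(RobertaForSequenceClassification):
    def __init__(self, config):
        super().__init__(config)
        for i, layer in enumerate(self.roberta.encoder.layer):
            # swap in the long self-attention, exactly as in the masked-LM class above
            layer.attention.self = RobertaLongSelfAttention(config, layer_id=i)
```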
|
meame2010/rare-puppers | 566d1b12a7ae078d04bbf59dc491c23ee4e4ad1e | 2022-03-07T00:03:06.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | meame2010 | null | meame2010/rare-puppers | 77 | null | transformers | 5,150 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.644444465637207
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dog drinking water

#### dog eating food

#### dog playing toy

#### dog sleeping
 |
Thanakrit/wangchanberta-th-QA | a84118efcaac38c79f30b93d44e4a375de3fcaf6 | 2022-03-26T13:49:54.000Z | [
"pytorch",
"camembert",
"question-answering",
"thai",
"th",
"dataset:thaiqa_squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Thanakrit | null | Thanakrit/wangchanberta-th-QA | 77 | null | transformers | 5,151 | ---
tags:
- generated_from_trainer
language:
- thai
- th
datasets:
- thaiqa_squad
model-index:
- name: wangchanberta-base-att-spm-uncased-finetuned-th-squad
results: []
widget:
- text: "เฝิง เส้าเฟิง รับบทอะไรใน The Palace"
context: "เฝิง เส้าเฟิง เฝิง เส้าเฟิง หรือ วิลเลี่ยม เฝิง (; ชื่อภาษาอังกฤษ: William Feng, Feng Shaofeng) เป็นนักแสดงที่มีชื่อเสียงจากเรื่อง \"White Vengeance\" และ\"Prince of Lan Ling\" เกิดเมื่อวันที่ 7 ตุลาคม ค.ศ. 1978ประวัติ ประวัติ. ดังเปรี้ยงปร้างเพียงชั่วคืน หลังจากกระโดดมารับบท \"องค์ชาย 8\" ในซีรีส์เจาะเวลาทะลุมิติเรื่อง \"The Palace\" คู่กับหยางมี่ในปี 2011 จนตอนนี้เฝิงเส้าเฟิงกลายเป็นพระเอกที่ถูกพูดถึงมากที่สุดคนหนึ่งของวงการบันเทิง และกลายเป็นแบบฉบับของชายหนุ่มที่สาวๆ ใฝ่ฝันถึง เพราะนอกจากหน้าตาที่หล่อเหลาแล้ว ชาติตระกูลของเขาก็ยังไม่ธรรมดาอีกด้วย เฝิงเส้าเฟิง เป็นลูกชายหัวแก้วหัวแหวนของนักธุรกิจอุตสาหกรรมสิ่งทอรายใหญ่ของจีน ครอบครัวเขามีโรงงานตั้งอยู่ที่เวินโจว กว่างโจว และฝูโจว ทรัพย์สินโดยรวมทั้งสิ้นไม่ต่ำกว่าพันล้านหยวน และเขาก็เป็นทายาทเพียงคนเดียวของตระกูล แต่เพราะเฝิงเส้าเฟิงใฝ่ฝันที่จะเข้าสู่วงการบันเทิง จึงได้เลือกที่จะเรียนการแสดงที่มหาวิทยาลัย shanghai theatre academy หลังจากเรียนจบก็มีโอกาสคลุกคลีทำงานอยู่ในวงการบันเทิงมากว่า 10 ปี กระทั่งประสบความสำเร็จอย่างทุกวันนี้ แถมได้ข่าวว่าเขากำลังอินเลิฟอยู่กับ \"หนีหนี\" นางเอกเรื่อง \"Flowers Of War\" หนังระดับรางวัลของผู้กำกับจางอี้โหมวอีกด้วยผลงานด้านภาพยนตร์ภาพยนตร์ละครโทรทัศน์"
- text: "ดึกแล้วคุณขา เป็นภาพยนตร์แนวใด"
context: "ดึกแล้วคุณขา ดึกแล้วคุณขา (Bangkok Time) เป็นภาพยนตร์ไทยที่กำกับโดยสันติ แต้พานิช ออกฉายในปี พ.ศ. 2550 โดยฉายแบบจำกัดโรงที่โรงภาพยนตร์ลิโด้ ดึกแล้วคุณขา เป็นภาพยนตร์รัก โรแมนติก มีเนื้อหาเกี่ยวกับรักสามเส้า ของชายสองหญิงหนึ่งที่เป็นคนทำงานกลางคืน คือ พ่อค้าขายของปลอมริมถนนสีลม (รับบทโดย อรรถพร ธีมากร) ชายหนุ่มบริการทางโทรศัพท์ (รับบทโดย อนันดา เอเวอริ่งแฮม) และนางพยาบาลที่ต้องเข้าเวรกะดึก (รับบทโดย ดุสิตา อนุชิตชาญชัย) ภาพยนตร์เรื่องนี้เข้าฉายใน Bangkok Film Festival 2007 และ Vancouver International Film Festival 2007 ในสาขา Dragon and Tiger Award"
- text: "แอนนา ฟาริส เกิดเมื่อไร"
context: "แอนนา ฟาริส แอนนา เคย์ ฟาริส (; เกิด 29 พฤศจิกายน ค.ศ. 1976) เป็นนักแสดงแะนักร้องชาวอเมริกัน ฟาริสปรากฏในภาพยนตร์ซีรีส์เรื่อง Scary Movie ในภาพยนตร์เรื่อง The Hot Chick (2002), Lost in Translation (2003), และ My Super Ex-Girlfriend (2006) ในปี 2008 เธอแสดงในเรื่อง The House Bunny ที่เธอได้รับการเสนอชื่อเข้าชิงรางวัลเอ็มทีวีมูวี่อวอร์ดส ผลงานภาพยนตร์หลัง ๆ ของเธออย่างเช่น Young Americans, Frequently Asked Questions About Time Travel, Yogi Bear, และ Observe and Report เธอยังพากย์เสียงให้กับภาพยนตร์แอนิเมชันเรื่อง Cloudy with a Chance of Meatballs และ"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-th-QA
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the thaiqa_squad dataset.
Code for fine-tuning the model: [github](https://github.com/KillM0nGerZ/WangchanBERTa-for-QuestionAnswering.git)
|
osanseviero/llama-alpaca-snake | 56d73fd4594b1aaa96869f0c8cc4792656b50402 | 2022-04-01T09:45:01.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"llama-leaderboard",
"model-index"
] | image-classification | false | osanseviero | null | osanseviero/llama-alpaca-snake | 77 | null | transformers | 5,152 | ---
tags:
- image-classification
- pytorch
- huggingpics
- llama-leaderboard
metrics:
- accuracy
model-index:
- name: llama-alpaca-snake
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7910447716712952
---
# llama-alpaca-snake
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### alpaca

#### llamas

#### snake
 |
jmarshall/rare-puppers | d3eba330bc27b280adc819729eebcbd1ddd32aad | 2022-04-20T14:45:23.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | jmarshall | null | jmarshall/rare-puppers | 77 | null | transformers | 5,153 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9402984976768494
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
Ahmed9275/ALL-2 | 609860d67be98a48e1073c55db9da3afc6004422 | 2022-04-28T02:07:25.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Ahmed9275 | null | Ahmed9275/ALL-2 | 77 | 1 | transformers | 5,154 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL-2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9855383038520813
---
# ALL-2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
firas-spanioli/beer-whisky-wine-detection | 1421666a41dafec2956a1d94e7b56083134fdbee | 2022-05-11T11:38:38.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | firas-spanioli | null | firas-spanioli/beer-whisky-wine-detection | 77 | null | transformers | 5,155 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: beer-whisky-wine-detection
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9701492786407471
---
# beer-whisky-wine-detection
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### beer

#### whisky

#### wine
 |
utkarshsaboo45/ClearlyDefinedLicenseSummarizer | 954497f0c6592506a9223a87160e9e38dc8cc95a | 2022-05-27T23:18:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | utkarshsaboo45 | null | utkarshsaboo45/ClearlyDefinedLicenseSummarizer | 77 | null | transformers | 5,156 | ---
license: mit
---
|
smc/electric_pole_type_classification | 14ee7ca59552f2118afe5e796dca834eb281eefd | 2022-05-23T20:11:01.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | smc | null | smc/electric_pole_type_classification | 77 | 1 | transformers | 5,157 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: electric_pole_type_classification
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8571428656578064
---
# electric_pole_type_classification
Classifies electric pole images into the following types: R200, H, R300, Portico, Poste, Tripode. |
Gods/discord_test | 787e73cc2b5e10ab663f407787042253828a4aa2 | 2022-06-27T13:26:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Gods | null | Gods/discord_test | 77 | null | transformers | 5,158 | Entry not found |
robingeibel/reformer-finetuned | 10f24ed84d07de39a3f4d9a30df5c386a367a87f | 2022-07-04T07:11:33.000Z | [
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"dataset:big_patent",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | robingeibel | null | robingeibel/reformer-finetuned | 77 | null | transformers | 5,159 | ---
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: reformer-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-finetuned
This model is a fine-tuned version of [google/reformer-crime-and-punishment](https://huggingface.co/google/reformer-crime-and-punishment) on the big_patent dataset, Wikipedia, and arXiv.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0 | 1.0 | 29934 | 0.0000 |
| 0.0 | 2.0 | 59868 | 0.0000 |
| 0.0 | 3.0 | 89802 | 0.0000 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aftabhussain/Tomato_Leaf_Classifier | d1b535f6a8f2c556e358236ab0dbe1c5087ff0e3 | 2021-09-13T04:14:44.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Aftabhussain | null | Aftabhussain/Tomato_Leaf_Classifier | 76 | null | transformers | 5,160 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Tomato_Leaf_Classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# Tomato_Leaf_Classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bacterial_spot

#### Healthy
 |
DeepPavlov/roberta-large-winogrande | 74b9e8bb2314ca9f440eb14de45fa3a29ae0f5dc | 2021-12-24T14:20:49.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:winogrande",
"arxiv:1907.11692",
"transformers"
] | text-classification | false | DeepPavlov | null | DeepPavlov/roberta-large-winogrande | 76 | null | transformers | 5,161 | ---
language:
- en
datasets:
- winogrande
widget:
- text: "The roof of Rachel's home is old and falling apart, while Betty's is new. The home value of </s> Rachel is lower."
- text: "The wooden doors at my friends work are worse than the wooden desks at my work, because the </s> desks material is cheaper."
- text: "Postal Service were to reduce delivery frequency. </s> The postal service could deliver less frequently."
- text: "I put the cake away in the refrigerator. It has a lot of butter in it. </s> The cake has a lot of butter in it."
---
# RoBERTa Large model fine-tuned on Winogrande
This model was fine-tuned on the WinoGrande dataset (XL size) in a sequence classification task format, meaning that the original sentences,
with the corresponding options filled in, were separated into text segment pairs, shuffled, and classified independently of each other.
## Model description
## Intended use & limitations
### How to use
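One way to score a candidate pair (a sketch; the mapping of the two output logits to True/False is an assumption, see the training-data description below):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/roberta-large-winogrande")
model = AutoModelForSequenceClassification.from_pretrained("DeepPavlov/roberta-large-winogrande")

# Score one (prefix, option + suffix) pair, following the format described below.
inputs = tokenizer("The plant took up too much room in the urn, because the ",
                   "urn was small.", return_tensors="pt")
logits = model(**inputs).logits
```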
## Training data
[WinoGrande-XL](https://huggingface.co/datasets/winogrande) reformatted the following way:
1. Each sentence was split on "`_`" placeholder symbol.
2. Each option was concatenated with the second part of the split, thus transforming each example into two text segment pairs.
3. Text segment pairs corresponding to correct and incorrect options were marked with `True` and `False` labels accordingly.
4. Text segment pairs were shuffled thereafter.
For example,
```json
{
"answer": "2",
"option1": "plant",
"option2": "urn",
"sentence": "The plant took up too much room in the urn, because the _ was small."
}
```
becomes
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "plant was small.",
"label": false
}
```
and
```json
{
"sentence1": "The plant took up too much room in the urn, because the ",
"sentence2": "urn was small.",
"label": true
}
```
These sentence pairs are then treated as independent examples; a minimal sketch of the whole transformation is shown below.
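The following helper (hypothetical, not from the released training code) implements steps 1-3 above; step 4 is a plain shuffle over the resulting pairs:
```python
def winogrande_to_pairs(example):
    # Step 1: split the sentence on the "_" placeholder.
    prefix, suffix = example["sentence"].split("_")
    # Steps 2-3: concatenate each option with the sentence tail and label it.
    return [
        {
            "sentence1": prefix,
            "sentence2": example[option] + suffix,
            "label": option == "option" + example["answer"],
        }
        for option in ("option1", "option2")
    ]
```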
### BibTeX entry and citation info
```bibtex
@article{sakaguchi2019winogrande,
title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},
journal={arXiv preprint arXiv:1907.10641},
year={2019}
}
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Emanuel/twitter-emotion-deberta-v3-base | c83e7be5802e7f2687eedff0786491ea6e8e43ea | 2022-07-13T12:37:26.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Emanuel | null | Emanuel/twitter-emotion-deberta-v3-base | 76 | null | transformers | 5,162 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: twitter-emotion-deberta-v3-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.937
---
# twitter-emotion-deberta-v3-base
This model is a fine-tuned version of [DeBERTa-v3](https://huggingface.co/microsoft/deberta-v3-base). It achieves the following results on the evaluation set:
- Loss: 0.1474
- Accuracy: 0.937
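A minimal inference sketch (assuming the standard `text-classification` pipeline; the example tweet and output are illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Emanuel/twitter-emotion-deberta-v3-base")
classifier("I can't stop smiling today!")
# e.g. [{'label': 'joy', 'score': 0.99}]  (illustrative output)
```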
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 80
- eval_batch_size: 80
- lr_scheduler_type: linear
- num_epochs: 6.0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3 |
JorgeSarry/est5-summarize | 994786697eef7d23732b20f8875c0b7e52428baa | 2021-10-03T18:22:58.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | JorgeSarry | null | JorgeSarry/est5-summarize | 76 | null | transformers | 5,163 | ---
language: es
---
This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings trained on 60k Spanish MLSum for summarization.
You can use it by prefixing your input with the "summarize:" task prefix, as in the sketch below.
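A minimal usage sketch (the input article text is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("JorgeSarry/est5-summarize")
model = AutoModelForSeq2SeqLM.from_pretrained("JorgeSarry/est5-summarize")

text = "summarize: " + "Texto del artículo en español..."  # prepend the task prefix
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=96, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```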
|
LilaBoualili/bert-vanilla | 923ae4df2a59683465eb6520158ca0b27b6f02eb | 2021-05-18T21:27:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | LilaBoualili | null | LilaBoualili/bert-vanilla | 76 | null | transformers | 5,164 | At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. |
akivo4ka/ruGPT3medium_psy | 8282334d7ddd903306ad1631ef43a59e5aecf776 | 2021-05-21T12:41:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | akivo4ka | null | akivo4ka/ruGPT3medium_psy | 76 | null | transformers | 5,165 | Entry not found |
asafaya/albert-xlarge-arabic | 3530341360b689496bee04eb743a2c37a7043a34 | 2022-02-11T13:52:49.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"masked-lm",
"autotrain_compatible"
] | fill-mask | false | asafaya | null | asafaya/albert-xlarge-arabic | 76 | null | transformers | 5,166 | ---
language: ar
datasets:
- oscar
- wikipedia
tags:
- ar
- masked-lm
---
# Arabic-ALBERT Xlarge
Arabic edition of ALBERT Xlarge pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of corpus contains some non-Arabic words inlines, which we did not remove from sentences since that would affect some tasks like NER.
- Although non-Arabic characters were lowered as a preprocessing step, since Arabic characters do not have upper or lower case, there is no cased and uncased version of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic, they contain some dialectical Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's github [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free from [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows training settings of bert with some changes: trained for 7M training steps with batchsize of 64, instead of 125K with batchsize of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-xlarge-arabic")
# loading the model
model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-xlarge-arabic")
```
## Acknowledgement
Thanks to Google for providing free TPU for the training process and for Huggingface for hosting these models on their servers 😊
|
castorini/ance-dpr-question-multi | f977e296f80af577fc70728f5d45917e57f6ef3c | 2021-04-21T01:36:24.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2007.00808",
"transformers"
] | feature-extraction | false | castorini | null | castorini/ance-dpr-question-multi | 76 | null | transformers | 5,167 | This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
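A rough retrieval sketch; the import paths and the prebuilt index name are assumptions that vary across Pyserini versions, so treat the linked docs as authoritative:
```python
# Sketch only: module paths and index names differ across Pyserini versions.
from pyserini.search.faiss import FaissSearcher, AnceQueryEncoder

encoder = AnceQueryEncoder("castorini/ance-dpr-question-multi")
searcher = FaissSearcher.from_prebuilt_index("wikipedia-ance-multi-bf", encoder)  # hypothetical index name
hits = searcher.search("who wrote the declaration of independence?")
print(hits[0].docid, hits[0].score)
```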
|
chinhon/pegasus-newsroom-headline_writer | 2a18c38965b0016f5b451ebc0009b8d9b6815e39 | 2021-10-28T11:13:58.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/pegasus-newsroom-headline_writer | 76 | 2 | transformers | 5,168 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-newsroom-headline_writer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-headline_writer
This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3988
- Rouge1: 41.8748
- Rouge2: 23.1947
- Rougel: 35.6263
- Rougelsum: 35.7355
- Gen Len: 34.1266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5784 | 1.0 | 31200 | 1.4287 | 41.4257 | 22.9355 | 35.3299 | 35.4648 | 34.4677 |
| 1.3501 | 2.0 | 62400 | 1.3955 | 41.9119 | 23.1912 | 35.6698 | 35.7479 | 33.8672 |
| 1.2417 | 3.0 | 93600 | 1.3988 | 41.8748 | 23.1947 | 35.6263 | 35.7355 | 34.1266 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
dog/rare-puppers | a947e18c081a4a980f0e0b71dfba55a2ca482c31 | 2021-11-10T03:41:31.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | dog | null | dog/rare-puppers | 76 | null | transformers | 5,169 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8651685118675232
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### husky

#### samoyed

#### shibu inu
 |
eliwill/rare-puppers | 3d4139fb96a966fe8fb18664ff423204b59da83c | 2022-01-08T01:40:43.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | eliwill | null | eliwill/rare-puppers | 76 | null | transformers | 5,170 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.4895833432674408
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### algebra

#### arithmetic

#### calculus

#### geometry

#### trigonometry
 |
filipafcastro/beer_vs_wine | c82ba21b4808761433e14f5f8d894f66a184a0a4 | 2021-11-05T10:55:21.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | filipafcastro | null | filipafcastro/beer_vs_wine | 76 | null | transformers | 5,171 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: beer_vs_wine
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9777777791023254
---
# beer_vs_wine
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### beer

#### wine
 |
gaetangate/bart-large_genrl_lcquad2 | cee4f63bb6449eb54b64e677acbd17432d613685 | 2022-04-05T15:10:15.000Z | [
"pytorch",
"bart",
"text2text-generation",
"arxiv:2108.07337",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | gaetangate | null | gaetangate/bart-large_genrl_lcquad2 | 76 | null | transformers | 5,172 | ---
license: apache-2.0
---
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
``` |
gustavecortal/gpt-neo-2.7B-8bit | c4cdf336f9dbf6fd96d152927d52f64f59ca788f | 2022-03-04T10:33:18.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"dataset:The Pile",
"transformers",
"causal-lm",
"license:mit"
] | text-generation | false | gustavecortal | null | gustavecortal/gpt-neo-2.7B-8bit | 76 | 3 | transformers | 5,173 | ---
language: en
license: mit
tags:
- causal-lm
datasets:
- The Pile
---
### Quantized EleutherAI/gpt-neo-2.7B with 8-bit weights
This is a version of [EleutherAI's GPT-Neo](https://huggingface.co/EleutherAI/gpt-neo-2.7B) with 2.7 billion parameters that is modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080 Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
Here's how to run it: [](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9)
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Links
* [EleutherAI](https://www.eleuther.ai)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal) |
huggingtweets/barackobama | 6214faa24a909765d32267eefd454765cc7d94fe | 2022-07-05T22:19:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/barackobama | 76 | null | transformers | 5,174 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1329647526807543809/2SGvnHYV_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Barack Obama</div>
<div style="text-align: center; font-size: 14px;">@barackobama</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Barack Obama.
| Data | Barack Obama |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 334 |
| Short tweets | 22 |
| Tweets kept | 2894 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9f9to7y9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @barackobama's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3xlun3ts) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3xlun3ts/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/barackobama')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mrm8488/vit-base-patch16-224_finetuned-pneumothorax | 2afcf5cc3699938505192630ae862faecc3b250b | 2021-09-15T08:43:39.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers"
] | image-classification | false | mrm8488 | null | mrm8488/vit-base-patch16-224_finetuned-pneumothorax | 76 | 1 | transformers | 5,175 | Entry not found |
nateraw/huggingpics-package-demo-2 | 5db84577de0048ff5aca0370ac3a40ef3461334f | 2021-11-09T21:00:52.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"generated_from_trainer",
"license:apache-2.0"
] | image-classification | false | nateraw | null | nateraw/huggingpics-package-demo-2 | 76 | null | transformers | 5,176 | ---
license: apache-2.0
tags:
- image-classification
- huggingpics
- generated_from_trainer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingpics-package-demo-2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3761
- Acc: 0.9403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0328 | 1.0 | 24 | 0.9442 | 0.7463 |
| 0.8742 | 2.0 | 48 | 0.7099 | 0.9403 |
| 0.6451 | 3.0 | 72 | 0.5050 | 0.9403 |
| 0.508 | 4.0 | 96 | 0.3761 | 0.9403 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
nateraw/rare-puppers-new-auth | b0164e59c4986fea737a7abb22caf034d8a15492 | 2021-12-10T20:51:18.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/rare-puppers-new-auth | 76 | null | transformers | 5,177 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers-new-auth
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.89552241563797
---
# rare-puppers-new-auth
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
osanseviero/taco_or_what | 3d8b91acef7f4fa758c1e1b5ad2d424c915bba42 | 2021-07-03T18:39:20.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | osanseviero | null | osanseviero/taco_or_what | 76 | null | transformers | 5,178 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: taco_or_what
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5148809552192688
---
# taco_or_what
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### burrito

#### fajitas

#### kebab

#### quesadilla

#### taco
 |
speechbrain/asr-wav2vec2-transformer-aishell | 97ae797c492da87b5934c4990119f2c77ed1d411 | 2021-12-15T10:44:05.000Z | [
"en",
"dataset:aishell",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"Transformers",
"wav2vec2",
"pytorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-transformer-aishell | 76 | 3 | speechbrain | 5,179 | ---
language: "en"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- Transformers
- wav2vec2
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- aishell
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Transformer for AISHELL + wav2vec2 (Mandarin Chinese)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on AISHELL + wav2vec2 (Mandarin Chinese)
within SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Dev CER | Test CER | GPUs | Full Results |
|:-------------:|:--------------:|:--------------:|:--------:|:--------:|
| 05-03-21 | 5.19 | 5.58 | 2xV100 32GB | [Google Drive](https://drive.google.com/drive/folders/1zlTBib0XEwWeyhaXDXnkqtPsIBI18Uzs?usp=sharing)|
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on
the training transcriptions of AISHELL-1.
- Acoustic model made of a wav2vec2 encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
To train this system from scratch, [see our SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/develop/recipes/AISHELL-1/ASR/transformer).
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Mandarin)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-transformer-aishell", savedir="pretrained_models/asr-wav2vec2-transformer-aishell")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-transformer-aishell/example_mandarin.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
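For example, a minimal sketch of the call above with the GPU option added:
```python
from speechbrain.pretrained import EncoderDecoderASR

# Same model as above, moved to CUDA via run_opts.
asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-transformer-aishell",
    savedir="pretrained_models/asr-wav2vec2-transformer-aishell",
    run_opts={"device": "cuda"},
)
```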
## Parallel Inference on a Batch
Please [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to learn how to transcribe a batch of input sentences in parallel using a pre-trained model.
### Training
The model was trained with SpeechBrain (Commit hash: '480dde87').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/AISHELL-1/ASR/transformer/
python train.py hparams/train_ASR_transformer_with_wav2vect.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1P3w5BnwLDxMHFQrkCZ5RYBZ1WsQHKFZr?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` |
textattack/distilbert-base-uncased-MRPC | aeb733e21ed436844c6d9f9398a27c23f8e81be4 | 2020-07-06T16:30:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-MRPC | 76 | null | transformers | 5,180 | ## TextAttack Model Card
This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack
and the GLUE (MRPC) dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8578431372549019, as measured by the
eval set accuracy, found after 1 epoch.
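A minimal usage sketch (assuming the standard `transformers` sequence-classification API; the sentence pair and label mapping are assumptions):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("textattack/distilbert-base-uncased-MRPC")
model = AutoModelForSequenceClassification.from_pretrained("textattack/distilbert-base-uncased-MRPC")

# MRPC is a sentence-pair paraphrase task, so encode both sentences together.
inputs = tokenizer("He said the food was good.",
                   "He stated that the meal was tasty.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # index of the predicted class
```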
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
lazyturtl/roomidentifier | 5669106b99d6e461909ac1147664b726b43a5caa | 2022-03-30T04:10:41.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | lazyturtl | null | lazyturtl/roomidentifier | 76 | null | transformers | 5,181 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomidentifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# roomidentifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bathroom

#### Bedroom

#### DinningRoom

#### Kitchen

#### LivingRoom
 |
dimbyTa/rock-challenge-ViT-two-by-two | abad62db83cf90d89a350907e571836fddf9636d | 2022-04-20T11:19:22.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | dimbyTa | null | dimbyTa/rock-challenge-ViT-two-by-two | 76 | null | transformers | 5,182 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rock-challenge-ViT-two-by-two
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9663800001144409
---
# rock-challenge-ViT-two-by-two
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### fines

#### large

#### medium

#### pellets
 |
lazyturtl/blocks | c0280c6a366a0760cbe8052901c7399cd6bea738 | 2022-04-12T06:15:12.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | lazyturtl | null | lazyturtl/blocks | 76 | null | transformers | 5,183 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: blocks
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.4444444477558136
---
# blocks
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### blue color

#### cyan color

#### green color

#### orange color

#### red color

#### yellow color
 |
domluna/vit-base-patch16-224-in21k-shiba-inu-detector | 3017918814cd5d14745f1137114ab3ff14c0f1b5 | 2022-04-25T02:16:24.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | domluna | null | domluna/vit-base-patch16-224-in21k-shiba-inu-detector | 76 | 1 | transformers | 5,184 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-shiba-inu-detector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-shiba-inu-detector
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on a dataset with 4 dog types, including Shiba Inu.
It achieves the following results on the evaluation set:
- Loss: 0.6511
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
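As a rough illustration, the settings above map onto `transformers.TrainingArguments` along these lines (a hedged sketch; the output directory name is an assumption, and the Adam betas/epsilon are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-shiba-inu-detector",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```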
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.94 | 4 | 1.3875 | 0.1667 |
| No log | 1.94 | 8 | 1.2712 | 0.7833 |
| 1.4176 | 2.94 | 12 | 1.0972 | 0.9 |
| 1.4176 | 3.94 | 16 | 0.9365 | 0.95 |
| 1.0144 | 4.94 | 20 | 0.7836 | 0.9833 |
| 1.0144 | 5.94 | 24 | 0.6511 | 1.0 |
| 1.0144 | 6.94 | 28 | 0.5329 | 1.0 |
| 0.6329 | 7.94 | 32 | 0.4403 | 1.0 |
| 0.6329 | 8.94 | 36 | 0.3777 | 1.0 |
| 0.3821 | 9.94 | 40 | 0.3273 | 1.0 |
| 0.3821 | 10.94 | 44 | 0.2886 | 1.0 |
| 0.3821 | 11.94 | 48 | 0.2622 | 1.0 |
| 0.2655 | 12.94 | 52 | 0.2397 | 1.0 |
| 0.2655 | 13.94 | 56 | 0.2250 | 1.0 |
| 0.202 | 14.94 | 60 | 0.2152 | 1.0 |
| 0.202 | 15.94 | 64 | 0.2074 | 1.0 |
| 0.202 | 16.94 | 68 | 0.2003 | 1.0 |
| 0.1785 | 17.94 | 72 | 0.1960 | 1.0 |
| 0.1785 | 18.94 | 76 | 0.1936 | 1.0 |
| 0.1618 | 19.94 | 80 | 0.1930 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
plantdoctor/swin-tiny-patch4-window7-224-plant-doctor | 01b8844d54d553e95f86c175599b5100ed255183 | 2022-04-22T12:31:55.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | plantdoctor | null | plantdoctor/swin-tiny-patch4-window7-224-plant-doctor | 76 | null | transformers | 5,185 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-plant-doctor
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9982930298719772
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-plant-doctor
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0043
- Accuracy: 0.9983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0643 | 1.0 | 3954 | 0.0218 | 0.9933 |
| 0.0536 | 2.0 | 7908 | 0.0103 | 0.9966 |
| 0.018 | 3.0 | 11862 | 0.0043 | 0.9983 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu115
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Shitao/msmarco_doc_encoder | 2226e2fe8719ffce9563be0e61de6a48d77897e5 | 2022-04-24T17:13:20.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | Shitao | null | Shitao/msmarco_doc_encoder | 76 | null | transformers | 5,186 | ---
license: apache-2.0
---
|
karthiksv/vit-base-patch16-224-cifar10 | 3874803225193f8c386df2874d2dc3bb0f81284a | 2022-06-30T02:05:56.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:cifar10",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | karthiksv | null | karthiksv/vit-base-patch16-224-cifar10 | 76 | null | transformers | 5,187 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- cifar10
model-index:
- name: vit-base-patch16-224-cifar10
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: cifar10
type: cifar10
config: plain_text
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.1004
verified: true
- name: Precision Macro
type: precision
value: 0.07725693204097324
verified: true
- name: Precision Micro
type: precision
value: 0.1004
verified: true
- name: Precision Weighted
type: precision
value: 0.07725693204097323
verified: true
- name: Recall Macro
type: recall
value: 0.1004
verified: true
- name: Recall Micro
type: recall
value: 0.1004
verified: true
- name: Recall Weighted
type: recall
value: 0.1004
verified: true
- name: F1 Macro
type: f1
value: 0.07942008420616108
verified: true
- name: F1 Micro
type: f1
value: 0.1004
verified: true
- name: F1 Weighted
type: f1
value: 0.07942008420616108
verified: true
- name: loss
type: loss
value: 2.3154706954956055
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
matteopilotto/vit-base-patch16-224-in21k-snacks | 54e537f1b0706a837fac7718b047a3c6563deb22 | 2022-06-27T22:19:35.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:Matthijs/snacks",
"transformers",
"model-index"
] | image-classification | false | matteopilotto | null | matteopilotto/vit-base-patch16-224-in21k-snacks | 76 | 0 | transformers | 5,188 | ---
datasets:
- Matthijs/snacks
model-index:
- name: matteopilotto/vit-base-patch16-224-in21k-snacks
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: Matthijs/snacks
type: Matthijs/snacks
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.8928571428571429
verified: true
- name: Precision Macro
type: precision
value: 0.8990033704680036
verified: true
- name: Precision Micro
type: precision
value: 0.8928571428571429
verified: true
- name: Precision Weighted
type: precision
value: 0.8972398709051788
verified: true
- name: Recall Macro
type: recall
value: 0.8914608843537415
verified: true
- name: Recall Micro
type: recall
value: 0.8928571428571429
verified: true
- name: Recall Weighted
type: recall
value: 0.8928571428571429
verified: true
- name: F1 Macro
type: f1
value: 0.892544821273258
verified: true
- name: F1 Micro
type: f1
value: 0.8928571428571429
verified: true
- name: F1 Weighted
type: f1
value: 0.8924168605019522
verified: true
- name: loss
type: loss
value: 0.479541540145874
verified: true
---
# Vision Transformer fine-tuned on `Matthijs/snacks` dataset
Vision Transformer (ViT) model pre-trained on ImageNet-21k and fine-tuned on [**Matthijs/snacks**](https://huggingface.co/datasets/Matthijs/snacks) for 5 epochs using various data augmentation transformations from `torchvision`.
The model achieves a **94.97%** and **94.43%** accuracy on the validation and test set, respectively.
## Data augmentation pipeline
The code block below shows the various transformations applied during pre-processing to augment the original dataset.
The augmented images were generated on the fly with the `set_transform` method; a sketch of this appears at the end of the code block below.
```python
from transformers import ViTFeatureExtractor
from torchvision.transforms import (
Compose,
Normalize,
Resize,
RandomResizedCrop,
RandomHorizontalFlip,
RandomAdjustSharpness,
ToTensor
)
checkpoint = 'google/vit-base-patch16-224-in21k'
feature_extractor = ViTFeatureExtractor.from_pretrained(checkpoint)
# transformations on the training set
train_aug_transforms = Compose([
RandomResizedCrop(size=feature_extractor.size),
RandomHorizontalFlip(p=0.5),
RandomAdjustSharpness(sharpness_factor=5, p=0.5),
ToTensor(),
Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std),
])
# transformations on the validation/test set
valid_aug_transforms = Compose([
Resize(size=(feature_extractor.size, feature_extractor.size)),
ToTensor(),
Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std),
])
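
# A hedged sketch of applying the transforms on the fly with `set_transform`
# (assumes the dataset was loaded via `datasets.load_dataset("Matthijs/snacks")`
# and exposes an "image" column as below).
def preprocess_train(batch):
    batch["pixel_values"] = [train_aug_transforms(img.convert("RGB")) for img in batch["image"]]
    return batch

# train_ds.set_transform(preprocess_train)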
``` |
smc/electric_2 | de004ab1cf73d68cf0b265f387fbbe0e6eb2e205 | 2022-05-21T14:38:26.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | smc | null | smc/electric_2 | 76 | null | transformers | 5,189 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: electric pole classification
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
Classifies whether an electric pole has a transformer or not.
|
smc/PANDA_ViT | af0e45d80b2cba0c8f41b91275f8f5dc18dbfb8e | 2022-05-23T21:41:42.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"model-index"
] | image-classification | false | smc | null | smc/PANDA_ViT | 76 | 1 | transformers | 5,190 | ---
tags:
- image-classification
- pytorch
metrics:
- accuracy
- Cohen's Kappa
model-index:
- name: PANDA_ViT
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.47959184646606445
- name: Quadratic Cohen's Kappa
type: Quadratic Cohen's Kappa
value: 0.5590880513191223
---
# PANDA_ViT
An attempt to use a ViT for medical image classification (ISUP grading in prostate histopathology images). It currently uses a tiled and concatenated whole-slide image (WSI) as input.
Example images, shape (1152, 1152, 3), each a concatenation of 36 WSI patches:
ISUP 0:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c02d3bb3a62519b31c63d0301c6843e_0.jpeg">
ISUP 1:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0cee71ab57422e04f76e09ef2186fcd5_1.jpeg">
ISUP 2:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/00bbc1482301d16de3ff63238cfd0b34_2.jpeg">
ISUP 3:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c5c2d16c0f2e399b7be641e7e7f66d9_3.jpeg">
ISUP 4:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c88d7c7033e2048b1068e208b105270_4.jpeg">
ISUP 5:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/00c15b23b30a5ba061358d9641118904_5.jpeg"> |
MSaudTahir/wav2vec2-large-xls-r-300m-urdu-proj | 390172a68e39e1c18d658c53d3fbee6597372759 | 2022-06-02T16:12:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | MSaudTahir | null | MSaudTahir/wav2vec2-large-xls-r-300m-urdu-proj | 76 | null | transformers | 5,191 | Entry not found |
Gadmz/censor-testing-performance | 2e32fd073d97437af0fe8969c500454db38485bc | 2022-07-01T08:14:00.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Gadmz | null | Gadmz/censor-testing-performance | 76 | null | transformers | 5,192 | Entry not found |
neulab/gpt2-med-finetuned-wikitext103 | 91221bfd6dda5d4d17e7855e4883398d135cf28f | 2022-07-14T15:38:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:2201.12431",
"transformers"
] | text-generation | false | neulab | null | neulab/gpt2-med-finetuned-wikitext103 | 76 | null | transformers | 5,193 | This is a `gpt2-medium` model, finetuned on the Wikitext-103 dataset.
It achieves a perplexity of **11.55** with a "sliding window" context, as measured by the `run_clm.py` script at [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers). The repository also reports retrieval-augmented results for the smaller base LMs:
| Base LM: | `distilgpt2` | `gpt2` |
| :--- | ----: | ---: |
| base perplexity | 18.25 | 14.84 |
| + kNN-LM | 15.03 | 12.57 |
| + RetoMaton | **14.70** | **12.46** |
This model was released as part of the paper ["Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval"](https://arxiv.org/pdf/2201.12431.pdf) (ICML'2022).
For more information, see: [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers)
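A minimal loading sketch (assuming the standard `transformers` causal-LM API; the prompt is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("neulab/gpt2-med-finetuned-wikitext103")
model = AutoModelForCausalLM.from_pretrained("neulab/gpt2-med-finetuned-wikitext103")

inputs = tokenizer("The history of natural language processing", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```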
If you use this model, please cite:
```
@inproceedings{alon2022neuro,
title={Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval},
author={Alon, Uri and Xu, Frank and He, Junxian and Sengupta, Sudipta and Roth, Dan and Neubig, Graham},
booktitle={International Conference on Machine Learning},
pages={468--485},
year={2022},
organization={PMLR}
}
``` |
FinanceInc/finbert-pretrain | e02aba19bb507a85dde57ab60972de6e35f1efff | 2022-07-27T20:43:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"unk",
"arxiv:2006.08097",
"transformers",
"autotrain",
"pre-trained",
"finbert",
"autotrain_compatible"
] | fill-mask | false | FinanceInc | null | FinanceInc/finbert-pretrain | 76 | null | transformers | 5,194 | ---
tags:
- autotrain
- pre-trained
- finbert
- fill-mask
language: unk
widget:
- text: Tesla remains one of the highest [MASK] stocks on the market. Meanwhile, Aurora Innovation is a pre-revenue upstart that shows promise.
- text: Asian stocks [MASK] from a one-year low on Wednesday as U.S. share futures and oil recovered from the previous day's selloff, but uncertainty over the impact of the Omicron
- text: U.S. stocks were set to rise on Monday, led by [MASK] in Apple which neared $3 trillion in market capitalization, while investors braced for a Federal Reserve meeting later this week.
---
`FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice.
### Pre-training
It is trained on the following three financial communication corpora, totaling 4.9B tokens.
- Corporate Reports 10-K & 10-Q: 2.5B tokens
- Earnings Call Transcripts: 1.3B tokens
- Analyst Reports: 1.1B tokens
The entire training is done using an **NVIDIA DGX-1** machine. The server has 4 Tesla P100 GPUs, providing a total of 128 GB of GPU memory. This machine enables us to train the BERT models using a batch size of 128. We utilize the Horovod framework for multi-GPU training. Overall, the total time taken to perform pretraining for one model is approximately **2 days**.
More details on `FinBERT`'s pre-training process can be found at: https://arxiv.org/abs/2006.08097
`FinBERT` can be further fine-tuned on downstream tasks. Specifically, we have fine-tuned `FinBERT` on an analyst sentiment classification task, and the fine-tuned model is shared at [https://huggingface.co/demo-org/auditor_review_model](https://huggingface.co/demo-org/auditor_review_model)
### Usage
Load the model directly from Transformers:
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("demo-org/finbert-pretrain", use_auth_token=True)
```
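For masked-token prediction like the widget examples above, a hedged `fill-mask` pipeline sketch (using the same demo model ID as the usage section; the sentence is taken from the widget list):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="demo-org/finbert-pretrain", use_auth_token=True)
print(fill_mask("Asian stocks [MASK] from a one-year low on Wednesday."))
```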
### Questions
Please contact the Data Science COE if you have more questions about this pre-trained model.
### Demo Model
This model card is for demo purposes. The original model card for this model is [https://huggingface.co/yiyanghkust/finbert-pretrain](https://huggingface.co/yiyanghkust/finbert-pretrain). |
GKLMIP/bert-myanmar-base-uncased | f3d5ba16658848bcc4d202fbe0ae47b487a5032b | 2021-10-11T04:58:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/bert-myanmar-base-uncased | 75 | null | transformers | 5,195 | The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
GroNLP/bert-base-dutch-cased-upos-alpino | 98205e755bc716e04a7ff642441b3d1c6427b2cc | 2021-05-18T20:24:46.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"nl",
"arxiv:2105.02855",
"transformers",
"BERTje",
"pos",
"autotrain_compatible"
] | token-classification | false | GroNLP | null | GroNLP/bert-base-dutch-cased-upos-alpino | 75 | null | transformers | 5,196 | ---
language: nl
tags:
- BERTje
- pos
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
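A minimal tagging sketch (assuming the standard `transformers` token-classification pipeline; the example sentence is an assumption):
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="GroNLP/bert-base-dutch-cased-upos-alpino")
print(tagger("Het weer is vandaag erg mooi."))  # Dutch: "The weather is very nice today."
```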
|
Helsinki-NLP/opus-mt-de-ms | 74314972252fa3c9de984265d92fb00269ae65e3 | 2021-01-18T08:01:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"ms",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-ms | 75 | null | transformers | 5,197 | ---
language:
- de
- ms
tags:
- translation
license: apache-2.0
---
### deu-msa
* source group: German
* target group: Malay (macrolanguage)
* OPUS readme: [deu-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch below
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.eval.txt)
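Given the required target-language token, a minimal translation sketch (assuming the standard `transformers` Marian API; the example sentence is an assumption):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target language ID, e.g. >>ind<< or >>zsm_Latn<<.
batch = tokenizer([">>ind<< Guten Morgen!"], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```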
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.msa | 34.0 | 0.607 |
### System Info:
- hf_name: deu-msa
- source_languages: deu
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ms']
- src_constituents: {'deu'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: msa
- short_pair: de-ms
- chrF2_score: 0.607
- bleu: 34.0
- brevity_penalty: 0.9540000000000001
- ref_len: 3729.0
- src_name: German
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: ms
- prefer_old: False
- long_pair: deu-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-vi-ru | 0a51430ce05344bff0300a65e896536de0f52cbb | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"vi",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-vi-ru | 75 | null | transformers | 5,198 | ---
language:
- vi
- ru
tags:
- translation
license: apache-2.0
---
### vie-rus
* source group: Vietnamese
* target group: Russian
* OPUS readme: [vie-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.rus | 16.9 | 0.331 |
### System Info:
- hf_name: vie-rus
- source_languages: vie
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'ru']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: rus
- short_pair: vi-ru
- chrF2_score: 0.331
- bleu: 16.9
- brevity_penalty: 0.878
- ref_len: 2207.0
- src_name: Vietnamese
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: ru
- prefer_old: False
- long_pair: vie-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
cardiffnlp/bertweet-base-hate | 5d8e3b7a6ac951864f36757914847bf59c4306f7 | 2021-05-20T14:46:38.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/bertweet-base-hate | 75 | null | transformers | 5,199 |