repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ukr-models/uk-ner-quantized | ukr-models | null | 7 | 0 | null | 1 | null | true | false | false | mit | ['uk'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['ukrainian'] | false | true | true | 952 | false | ## Model Description
Quantized version of the [uk-ner model](https://huggingface.co/ukr-models/uk-ner). It returns B-PER, I-PER, B-LOC, I-LOC, B-ORG, and I-ORG tags.
## How to Use
After cloning the repository, use the following code (download the script `get_predictions.py` from the repository; it uses the [tokenize_uk package](https://pypi.org/project/tokenize_uk/) for sentence splitting):
```py
from transformers import AutoTokenizer
import torch
from get_predictions import get_word_predictions
tokenizer = AutoTokenizer.from_pretrained("./")
# The quantized model is saved as a serialized module, so it is loaded with torch.load
# rather than from_pretrained.
model = torch.load("./pytorch_model.bin")
labels_list = ['O','B-PER','I-PER','B-ORG','I-ORG','B-LOC','I-LOC']
texts = ["Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."]
get_word_predictions(model, tokenizer, texts, labels_list)
```
| 87e71d610c318151e46990e61186ba42 |
Scrya/whisper-small-ms | Scrya | whisper | 23 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ms'] | ['google/fleurs'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'incomplete', 'generated_from_trainer'] | true | true | true | 1,271 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small MS - FLEURS
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the FLEURS dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3324
- eval_wer: 15.6453
- eval_runtime: 347.6066
- eval_samples_per_second: 2.155
- eval_steps_per_second: 0.27
- epoch: 10.75
- step: 1000
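A minimal inference sketch, assuming the standard `transformers` ASR pipeline (the audio path is illustrative):
```py
# Minimal sketch: transcribe Malay speech with this fine-tuned checkpoint.
# "audio.wav" is a placeholder; any speech file works (ideally 16 kHz mono).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Scrya/whisper-small-ms")
print(asr("audio.wav")["text"])
```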
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 1942d0e6592b1e53743ba7b6f1698bf1 |
jonatasgrosman/wav2vec2-large-xlsr-53-italian | jonatasgrosman | wav2vec2 | 24 | 1,752 | transformers | 7 | automatic-speech-recognition | true | false | true | apache-2.0 | ['it'] | ['common_voice', 'mozilla-foundation/common_voice_6_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'it', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 4,288 | false |
# Fine-tuned XLSR-53 large model for speech recognition in Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-italian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "it"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-italian"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| POI LEI MORÌ. | POI LEI MORÌ |
| IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI. | IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI |
| "FIN DALL'INIZIO LA SEDE EPISCOPALE È STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE." | FIN DALL'INIZIO LA SEDE EPISCOPALE È STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE |
| IL VUOTO ASSOLUTO? | IL VUOTO ASSOLUTO |
| DOPO ALCUNI ANNI, EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI. | DOPO ALCUNI ANNI EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI |
| SALVATION SUE | SALVATION SOO |
| IN QUESTO MODO, DECIO OTTENNE IL POTERE IMPERIALE. | IN QUESTO MODO DECHO OTTENNE IL POTERE IMPERIALE |
| SPARTA NOVARA ACQUISISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA. | PARCANOVARACFILISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA |
| IN SEGUITO, KYGO E SHEAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE. | IN SEGUITO KIGO E SHIAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE |
| ALAN CLARKE | ALAN CLARK |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset mozilla-foundation/common_voice_6_0 --config it --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset speech-recognition-community-v2/dev_data --config it --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use the following BibTeX entry:
```bibtex
@misc{grosman2021xlsr53-large-italian,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {I}talian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian}},
year={2021}
}
```
| 5e41217800b462325a7261d1e2514fc5 |
rishabhjain16/whisper_medium_en_to_myst55h | rishabhjain16 | whisper | 25 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,724 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_medium_en_to_myst55h
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5812
- Wer: 11.9424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3378 | 1.01 | 500 | 0.3505 | 12.4476 |
| 0.1075 | 2.02 | 1000 | 0.3609 | 11.8145 |
| 0.041 | 3.03 | 1500 | 0.4061 | 11.6578 |
| 0.0256 | 4.04 | 2000 | 0.4597 | 11.8066 |
| 0.0099 | 5.06 | 2500 | 0.5020 | 12.0964 |
| 0.0028 | 6.07 | 3000 | 0.5476 | 11.9816 |
| 0.0015 | 7.08 | 3500 | 0.5869 | 11.7100 |
| 0.0018 | 8.09 | 4000 | 0.5812 | 11.9424 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
| 7c2695c30768855db51a270e8968648c |
allenai/macaw-3b | allenai | t5 | 8 | 252 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 920 | false |
# macaw-3b
## Model description
Macaw (<b>M</b>ulti-<b>a</b>ngle <b>c</b>(q)uestion <b>a</b>ns<b>w</b>ering) is a ready-to-use model capable of
general question answering,
showing robustness outside the domains it was trained on. It has been trained in "multi-angle" fashion,
which means it can handle a flexible set of input and output "slots"
(question, answer, multiple-choice options, context, and explanation).
Macaw was built on top of [T5](https://github.com/google-research/text-to-text-transfer-transformer) and comes in
three sizes: [macaw-11b](https://huggingface.co/allenai/macaw-11b), [macaw-3b](https://huggingface.co/allenai/macaw-3b),
and [macaw-large](https://huggingface.co/allenai/macaw-large), as well as an answer-focused version featured on
various leaderboards [macaw-answer-11b](https://huggingface.co/allenai/macaw-answer-11b).
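A minimal usage sketch, assuming the slot-based input format shown in the Macaw GitHub repository (the question string is illustrative):
```python
# Minimal sketch: ask macaw-3b for the "answer" slot given a "question" slot.
# The "$answer$ ; $question$ = ..." string follows the format documented
# in the Macaw repository.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-3b")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-3b")

input_string = "$answer$ ; $question$ = What is the color of a cloudy sky?"
input_ids = tokenizer.encode(input_string, return_tensors="pt")
output = model.generate(input_ids, max_length=200)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```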
See https://github.com/allenai/macaw for more details. | 6b76c38d952abffd5887e2da063051a7 |
google/switch-large-128 | google | switch_transformers | 15 | 121 | transformers | 4 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['c4'] | null | 5 | 1 | 4 | 0 | 2 | 0 | 2 | ['text2text-generation'] | false | true | true | 8,030 | false |
# Model Card for Switch Transformers Large - 128 experts

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
Switch Transformers is a Mixture of Experts (MoE) model trained on a Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the feed-forward layers replaced by sparse MLP layers containing "expert" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model enables faster training (scaling properties) while performing better than T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract:
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face Switch Transformers Docs (Similar to T5)](https://huggingface.co/docs/transformers/model_doc/switch_transformers)
# Usage
Note that these checkpoints have been trained on a Masked Language Modeling (MLM) task, so they are not "ready-to-use" for downstream tasks. You may want to check out `FLAN-T5` for running fine-tuned weights, or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing).
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128", device_map="auto")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128", device_map="auto", torch_dtype=torch.float16)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128", device_map="auto", load_in_8bit=True)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
# Uses
## Direct Use and Downstream Use
See the [research paper](https://arxiv.org/pdf/2101.03961.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Ethical considerations and risks
More information needed.
## Known Limitations
More information needed.
## Sensitive Use:
More information needed.
# Training Details
## Training Data
The model was trained on a Masked Language Modeling task, on Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2101.03961.pdf) the model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).
## Results
For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
doi = {10.48550/ARXIV.2101.03961},
url = {https://arxiv.org/abs/2101.03961},
author = {Fedus, William and Zoph, Barret and Shazeer, Noam},
keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` | e8db525ce21936b207699ff65ef828d7 |
IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese | IDEA-CCNL | bert | 7 | 203 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['classification', 'zero-shot'] | false | true | true | 9,553 | false |
# Erlangshen-UniMC-RoBERTa-330M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
- API: [Fengshen-OpenAPI](https://fengshenbang-lm.com/open-api)
## 简介 Brief Introduction
UniMC 核心思想是将自然语言理解任务转化为 multiple choice 任务,并且使用多个 NLU 任务来进行预训练。我们在英文数据集实验结果表明仅含有 2.35 亿参数的 [ALBERT模型](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-Albert-235M-English)的zero-shot性能可以超越众多千亿的模型。并在中文测评基准 FewCLUE 和 ZeroCLUE 两个榜单中,13亿的[二郎神](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese)获得了第一的成绩。
The core idea of UniMC is to convert natural language understanding tasks into multiple choice tasks and use multiple NLU tasks for pre-training. Our experimental results on the English dataset show that the zero-shot performance of a [ALBERT](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-Albert-235M-English) model with only 235 million parameters can surpass that of many hundreds of billions of models. And in the Chinese evaluation benchmarks FewCLUE and ZeroCLUE two lists, 1.3 billion [Erlangshen](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) won the first result.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | RoBERTa | 330M | Chinese |
## 模型信息 Model Information
我们为零样本学习者提出了一种与输入无关的新范式,从某种意义上说,它与任何格式兼容并适用于一系列语言任务,例如文本分类、常识推理、共指解析、情感分析。我们的方法将零样本学习转化为多项选择任务,避免常用的大型生成模型(如 FLAN)中的问题。它不仅增加了模型的泛化能力,而且显着减少了对参数的需求。我们证明了这种方法可以在通用语言基准上取得最先进的性能,并在自然语言推理和文本分类等任务上产生令人满意的结果。更多详细信息可以参考我们的[论文](https://arxiv.org/abs/2210.08590)或者[GitHub](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
We propose an new paradigm for zero-shot learners that is input-agnostic, in the sense that it is compatible with any format and applicable to a list of language tasks, such as text classification, commonsense reasoning, coreference resolution, sentiment analysis.
Our approach converts zero-shot learning into multiple choice tasks,
avoiding problems in commonly used large generative models such as FLAN. It not only adds generalization ability to the models, but also reduces the needs of parameters significantly. We demonstrate that this approach leads to state-of-the-art performance on common language benchmarks, and produces satisfactory results on tasks such as natural language inference and text classification. For more details, please refer to our [paper](https://arxiv.org/abs/2210.08590) or [github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
### 下游效果 Performance
**Few-shot**
| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
|------------|------------|----------|-----------|----------|-----------|-----------|-----------|----------|-----------|-----------|
| [FineTuning](https://arxiv.org/pdf/2107.07498.pdf)-RoBERTa-110M | 65.4 | 35.5 | 49 | 32.8 | 33 | 60.7 | 14.9 | 50 | 55.6 | 44.1 |
| [FineTuning](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 66.5 | 57 | 51.6 | 42.1 | 32 | 60.4 | 15 | 60.1 | 50.3 | 48.34 |
| [PET](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 84 | 59.9 | 56.4 | 50.3 | 38.1 | 58.4 | 40.6 | 61.1 | 58.7 | 56.39 |
| [P-tuning](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 80.6 | 56.6 | 55.9 | 52.6 | 35.7 | 60.8 | 39.61 | 51.8 | 55.7 | 54.37 |
| [EFL](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 76.7 | 47.9 | 56.3 | 52.1 | 48.7 | 54.6 | 30.3 | 52.8 | 52.3 | 52.7 |
| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 88.64 | 54.08 | 54.32 | 48.6 | 66.55 | 73.76 | 67.71 | 52.54 | 59.92 | 62.86 |
| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 89.53 | 57.3 | 54.25 | 50 | 70.59 | 77.49 | 78.09 | 55.73 | 65.16 | 66.46 |
| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **89.278** | **60.9** | **57.46** | 52.89 | **76.33** | **80.37** | **90.33** | 61.73 | **79.15** | **72.05** |
**Zero-shot**
| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
|---------------|-----------|-----------|-----------|-----------|-----------|----------|----------|----------|-----------|-----------|
| [GPT](https://arxiv.org/pdf/2107.07498.pdf)-110M | 57.5 | 26.2 | 37 | 19 | 34.4 | 50 | 65.6 | 50.1 | 50.3 | 43.4 |
| [PET](https://arxiv.org/pdf/2107.07498.pdf)-RoBERTa-110M | 85.2 | 12.6 | 26.1 | 26.6 | 40.3 | 50.6 | 57.6 | 52.2 | 54.7 | 45.1 |
| [NSP-BERT](https://arxiv.org/abs/2109.03564)-110M | 86.9 | 47.6 | 51 | 41.6 | 37.4 | 63.4 | 52 | **64.4** | 59.4 | 55.96 |
| [ZeroPrompt](https://arxiv.org/abs/2201.06910)-T5-1.5B | - | - | - | 16.14 | 46.16 | - | - | - | 47.98 | - |
| [Yuan1.0-13B](https://arxiv.org/abs/2110.04725) | 88.13 | 38.99 | 57.47 | 38.82 | 48.13 | 59.38 | 86.14 | 50 | 38.99 | 56.22 |
| [ERNIE3.0-240B](https://arxiv.org/abs/2107.02137) | 88.75 | **50.97** | **57.83** | **40.42** | 53.57 | 64.38 | 87.13 | 56.25 | 53.46 | 61.41 |
| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 86.16 | 31.26 | 46.61 | 26.54 | 66.91 | 73.34 | 66.68 | 50.09 | 53.66 | 55.7 |
| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 87.5 | 30.4 | 47.6 | 31.5 | 69.9 | 75.9 | 78.17 | 49.5 | 60.55 | 59.01 |
| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **88.79** | 42.06 | 55.21 | 33.93 | **75.57** | **79.5** | **89.4** | 50.25 | **66.67** | **64.53** |
**Full dataset**
| Model | AFQMC | TNEWS1.1 | IFLYTEK | OCNLI | CMNLI | WSC1.1 | CSL | CHID | C3 |
|--------------------------------------------|-------|----------|---------|-------|-------|--------|-------|-------|-------|
| RoBERTa-Base | 74.06 | 57.5 | 60.36 | 74.3 | 79.73 | 83.48 | 85.37 | - | - |
| RoBERTa-Large | 74.88 | 58.79 | 61.52 | 77.7 | 81.4 | 89.14 | 86 | - | - |
| [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B) 「Finetuning」 | 76.08 | 59.38 | 62.34 | 79.14 | 81 | 92.43 | 87.2 | 84.65 | 86.77 |
| [Erlangshen-UniMC-MegatronBERT-1.3B-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | 77.09 | 60.4 | 62.67 | 83.05 | 84.76 | 93.74 | 87.67 | 85.93 | 86.54 |
## 使用 Usage
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
cd Fengshenbang-LM
pip install --editable .
```
```python3
import argparse
from fengshen.pipelines.multiplechoice import UniMCPipelines
total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UniMCPipelines.piplines_args(total_parser)
args = total_parser.parse_args()
pretrained_model_path = 'IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese'
args.learning_rate=2e-5
args.max_length=512
args.max_epochs=3
args.batchsize=8
args.default_root_dir='./'
model = UniMCPipelines(args, pretrained_model_path)

train_data = []
dev_data = []
test_data = [
    {
        "texta": "放弃了途观L和荣威RX5,果断入手这部车,外观霸气又好开",
        "textb": "",
        "question": "下面新闻属于哪一个类别?",
        "choice": ["房产", "汽车", "教育", "科技"],
        "answer": "汽车",
        "label": 1,
        "id": 7759,
    }
]

if args.train:
    model.train(train_data, dev_data)

result = model.predict(test_data)
for line in result[:20]:
    print(line)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2210.08590):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2210.08590):
```text
@article{unimc,
author = {Ping Yang and
Junjie Wang and
Ruyi Gan and
Xinyu Zhu and
Lin Zhang and
Ziwei Wu and
Xinyu Gao and
Jiaxing Zhang and
Tetsuya Sakai},
title = {Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective},
journal = {CoRR},
volume = {abs/2210.08590},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | 8e1c75a175dfab3584feb8cde3f3ef02 |
jonatasgrosman/exp_w2v2t_zh-cn_vp-it_s607 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['zh-CN'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'zh-CN'] | false | true | true | 475 | false | # exp_w2v2t_zh-cn_vp-it_s607
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 86b32013f06a2893dbcbcacf4f7be6a9 |
deepiit98/Cardinal__Catholicism_-clustered | deepiit98 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,871 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# deepiit98/Cardinal__Catholicism_-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3258
- Train End Logits Accuracy: 0.9167
- Train Start Logits Accuracy: 0.9236
- Validation Loss: 0.0995
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3258 | 0.9167 | 0.9236 | 0.0995 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 27b04cd67e100cccab93a7b459b8ec9f |
ranieri-unimi/test-trainer | ranieri-unimi | roberta | 6 | 48 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 950 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [ShreyaR/finetuned-roberta-depression](https://huggingface.co/ShreyaR/finetuned-roberta-depression) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 137283aab24aa7efca9bfbcd23744f60 |
Helsinki-NLP/opus-mt-zh-he | Helsinki-NLP | marian | 11 | 13 | transformers | 0 | translation | true | true | false | apache-2.0 | ['zh', 'he'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,530 | false |
### zho-heb
* source group: Chinese
* target group: Hebrew
* OPUS readme: [zho-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-heb/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn cmn_Yiii lzh lzh_Bopo lzh_Hang lzh_Hani lzh_Hira lzh_Kana lzh_Yiii
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.heb | 28.5 | 0.469 |
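A minimal translation sketch using the standard Marian interface in `transformers` (the input sentence is illustrative):
```python
# Minimal sketch: Chinese -> Hebrew translation with this Marian checkpoint.
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-he")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-zh-he")

batch = tokenizer(["你好,世界。"], return_tensors="pt")  # "Hello, world."
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```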
### System Info:
- hf_name: zho-heb
- source_languages: zho
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'he']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: heb
- short_pair: zh-he
- chrF2_score: 0.469
- bleu: 28.5
- brevity_penalty: 0.986
- ref_len: 3654.0
- src_name: Chinese
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: he
- prefer_old: False
- long_pair: zho-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 92e2df6af0ada61cdb665dd595723dce |
sd-concepts-library/degods | sd-concepts-library | null | 9 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 988 | false | ### DeGods on Stable Diffusion
This is the `<degods>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




| 7dc2ee6e52face3e8b4d4ffea3e25aab |
ProGamerGov/knollingcase-embeddings-sd-v2-0 | ProGamerGov | null | 10 | 0 | null | 117 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 3 | 3 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 2,538 | false |
The embeddings in this repository were trained for the 768px [Stable Diffusion v2.0](https://huggingface.co/stabilityai/stable-diffusion-2) model. The embeddings should work on any model that uses SD v2.0 as a base.
Currently the kc32-v4-5000.pt & kc16-v4-5000.pt embeddings seem to perform the best.
**Knollingcase v1**
The v1 embeddings were trained for 4000 iterations with a batch size of 2, a text dropout of 10%, and 16 vectors, using Automatic1111's WebUI. A total of 69 training images with high-quality captions were used.
**Knollingcase v2**
The v2 embeddings were trained for 5000 iterations with a batch size of 4, a text dropout of 10%, and 16 vectors, using Automatic1111's WebUI. A total of 78 training images with high-quality captions were used.
**Knollingcase v3**
The v3 embeddings were trained for 4000-6250 iterations with a batch size of 4, a text dropout of 10%, and 16 vectors, using Automatic1111's WebUI. A total of 86 training images with high-quality captions were used.
<div align="center">
<img src="https://huggingface.co/ProGamerGov/knollingcase-embeddings-sd-v2-0/resolve/main/cruise_ship_on_wave_kc16-v3-6250.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/knollingcase-embeddings-sd-v2-0/resolve/main/cruise_ship_on_wave_kc16-v3-6250.png)
**Knollingcase v4**
The v4 embeddings were trained for 4000-6250 iterations with a batch size of 4 and a text dropout of 10%, using Automatic1111's WebUI. A total of 116 training images with high-quality captions were used.
<div align="center">
<img src="https://huggingface.co/ProGamerGov/knollingcase-embeddings-sd-v2-0/resolve/main/v4_size_768_t4x11.jpg">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/knollingcase-embeddings-sd-v2-0/resolve/main/v4_size_768_t4x11.jpg)
**Usage**
To use the embeddings, download and then rename the files to whatever trigger word you want to use. They were trained with kc8, kc16, kc32, but any trigger word should work.
The knollingcase style is considered to be a concept inside a sleek (sometimes scifi) display case with transparent walls, and a minimalistic background.
Suggested prompts:
```
<concept>, micro-details, photorealism, photorealistic, <kc-vx-iter>
photorealistic <concept>, very detailed, scifi case, <kc-vx-iter>
<concept>, very detailed, scifi transparent case, <kc-vx-iter>
```
Suggested negative prompts:
```
blurry, toy, cartoon, animated, underwater, photoshop
```
Suggested samplers:
DPM++ SDE Karras (used for the example images) or DPM++ 2S a Karras
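For `diffusers` users, a minimal loading sketch (assumes a locally downloaded embedding file and a `diffusers` version with textual-inversion support; the file name, token, and prompt are illustrative):
```python
# Minimal sketch: load a knollingcase embedding into an SD v2.0 diffusers pipeline.
# "./kc32-v4-5000.pt" and the token "kc32" are illustrative choices.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("./kc32-v4-5000.pt", token="kc32")

image = pipe("a bonsai tree, very detailed, scifi transparent case, kc32").images[0]
image.save("knollingcase_bonsai.png")
```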
| 5fc10c59f8873681534fb36e654483b3 |
morenolq/bart-it-WITS | morenolq | bart | 14 | 254 | transformers | 0 | text2text-generation | true | false | false | mit | ['it'] | ['Silvia/WITS'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bart', 'pytorch'] | false | true | true | 2,351 | false |
# BART-IT - WITS
BART-IT is a sequence-to-sequence model based on the BART architecture and specifically tailored to the Italian language. The model is pre-trained on a [large corpus of Italian text](https://huggingface.co/datasets/gsarti/clean_mc4_it) and can be fine-tuned on a variety of tasks.
## Model description
The model is a `base`-sized BART model with a vocabulary size of 52,000 tokens and 140M parameters. Trained from scratch on a large corpus of Italian text, it can be used for any task that requires a sequence-to-sequence model.
## Pre-training
The code used to pre-train BART-IT together with additional information on model parameters can be found [here](https://github.com/MorenoLaQuatra/bart-it).
## Fine-tuning
The model has been fine-tuned for the abstractive summarization task on 3 different Italian datasets:
- [FanPage](https://huggingface.co/datasets/ARTeLab/fanpage) - finetuned model [here](https://huggingface.co/MorenoLaQuatra/bart-it-fanpage)
- [IlPost](https://huggingface.co/datasets/ARTeLab/ilpost) - finetuned model [here](https://huggingface.co/morenolq/bart-it-ilpost)
- **This model** [WITS](https://huggingface.co/datasets/Silvia/WITS) - finetuned model [here](https://huggingface.co/morenolq/bart-it-WITS)
## Usage
In order to use the model, you can use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("morenolq/bart-it-WITS")
model = AutoModelForSeq2SeqLM.from_pretrained("morenolq/bart-it-WITS")
input_ids = tokenizer.encode("Il modello BART-IT è stato pre-addestrato su un corpus di testo italiano", return_tensors="pt")
outputs = model.generate(input_ids, max_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# Citation
If you find this model useful for your research, please cite the following paper:
```bibtex
@Article{BARTIT,
AUTHOR = {La Quatra, Moreno and Cagliero, Luca},
TITLE = {BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization},
JOURNAL = {Future Internet},
VOLUME = {15},
YEAR = {2023},
NUMBER = {1},
ARTICLE-NUMBER = {15},
URL = {https://www.mdpi.com/1999-5903/15/1/15},
ISSN = {1999-5903},
DOI = {10.3390/fi15010015}
}
```
| 578fda9b1f17195c2d13bfe7c97c306b |
ProceduralTree/HW3 | ProceduralTree | distilbert | 12 | 1 | transformers | 0 | multiple-choice | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,274 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HW3
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3128
- Accuracy: 0.3355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3467 | 1.0 | 1504 | 1.3195 | 0.3174 |
| 1.3042 | 2.0 | 3008 | 1.3128 | 0.3355 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 8ce8b666c6eeb0b5e9adb66c35317b8c |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_qnli | gokuls | mobilebert | 19 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,613 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1420
- Accuracy: 0.5923
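A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the question/sentence pair is illustrative):
```python
# Minimal sketch: QNLI-style entailment check with the fine-tuned MobileBERT.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_qnli",
)
print(clf({"text": "What is the capital of France?",
           "text_pair": "Paris is the capital and most populous city of France."}))
```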
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.6899 | 1.0 | 33208 | 1.1420 | 0.5923 |
| 0.498 | 2.0 | 66416 | 1.2196 | 0.5944 |
| 0.4209 | 3.0 | 99624 | 1.2370 | 0.5977 |
| 0.3746 | 4.0 | 132832 | 1.2784 | 0.5973 |
| 0.3449 | 5.0 | 166040 | 1.2649 | 0.5938 |
| 0.3238 | 6.0 | 199248 | 1.1662 | 0.6114 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| ca99b7293272c01b14bf2dea8b5691cf |
debatelab/argument-analyst | debatelab | t5 | 8 | 56 | transformers | 0 | text2text-generation | true | false | false | cc-by-sa-4.0 | ['en'] | ['debatelab/aaac'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,380 | false |
Pretraining Dataset: [AAAC01](https://huggingface.co/datasets/debatelab/aaac)
Demo: [DeepA2 Demo](https://huggingface.co/spaces/debatelab/deepa2-demo)
Paper: [DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models](https://arxiv.org/abs/2110.01509)
Authors: *Gregor Betz, Kyle Richardson*
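A minimal loading sketch (the model is a standard `transformers` text2text checkpoint; the plain-text input below is illustrative only, since the actual DeepA2 interface uses structured mode prefixes documented in the paper):
```python
# Minimal sketch: load ArgumentAnalyst and generate from an argumentative text.
# The prompt is illustrative; see the paper/demo for the DeepA2 input format
# (source text, premises, conclusions, formalizations, etc.).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("debatelab/argument-analyst")
model = AutoModelForSeq2SeqLM.from_pretrained("debatelab/argument-analyst")

text = "Socrates is human, and all humans are mortal. So Socrates is mortal."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```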
## Abstract
In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence. | 5b7d83206fe5b21d580602efbb330254 |
levinlab/neuroscience-to-dev-bio-4 | levinlab | bart | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,436 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neuroscience-to-dev-bio-4
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 18.5056 | 0.97 | 8 | 18.2694 |
| 15.7993 | 1.97 | 16 | 14.5706 |
| 12.8347 | 2.97 | 24 | 12.0677 |
| 11.5971 | 3.97 | 32 | 10.8629 |
| 10.463 | 4.97 | 40 | 9.3275 |
| 8.9798 | 5.97 | 48 | 7.0959 |
| 7.2515 | 6.97 | 56 | 5.9271 |
| 6.2773 | 7.97 | 64 | 5.3001 |
| 5.636 | 8.97 | 72 | 4.7396 |
| 5.0218 | 9.97 | 80 | 4.1504 |
| 4.3526 | 10.97 | 88 | 3.4576 |
| 3.5813 | 11.97 | 96 | 2.6589 |
| 2.7243 | 12.97 | 104 | 1.7789 |
| 1.7997 | 13.97 | 112 | 0.9672 |
| 0.995 | 14.97 | 120 | 0.4184 |
| 0.4459 | 15.97 | 128 | 0.1611 |
| 0.1844 | 16.97 | 136 | 0.0645 |
| 0.077 | 17.97 | 144 | 0.0292 |
| 0.0332 | 18.97 | 152 | 0.0212 |
| 0.0197 | 19.97 | 160 | 0.0187 |
| 0.0151 | 20.97 | 168 | 0.0169 |
| 0.0245 | 21.97 | 176 | 0.0160 |
| 0.0099 | 22.97 | 184 | 0.0206 |
| 0.0094 | 23.97 | 192 | 0.0158 |
| 0.0082 | 24.97 | 200 | 0.0170 |
| 0.0063 | 25.97 | 208 | 0.0159 |
| 0.0075 | 26.97 | 216 | 0.0169 |
| 0.0059 | 27.97 | 224 | 0.0154 |
| 0.0047 | 28.97 | 232 | 0.0164 |
| 0.0045 | 29.97 | 240 | 0.0181 |
| 0.0037 | 30.97 | 248 | 0.0192 |
| 0.0038 | 31.97 | 256 | 0.0160 |
| 0.0045 | 32.97 | 264 | 0.0162 |
| 0.0056 | 33.97 | 272 | 0.0150 |
| 0.0043 | 34.97 | 280 | 0.0149 |
| 0.0036 | 35.97 | 288 | 0.0155 |
| 0.0032 | 36.97 | 296 | 0.0183 |
| 0.0032 | 37.97 | 304 | 0.0158 |
| 0.0028 | 38.97 | 312 | 0.0155 |
| 0.0032 | 39.97 | 320 | 0.0160 |
| 0.0027 | 40.97 | 328 | 0.0180 |
| 0.0033 | 41.97 | 336 | 0.0164 |
| 0.0035 | 42.97 | 344 | 0.0211 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 60bb7f0ba4d806cdbaf3fbdc2a19045b |
jonatasgrosman/exp_w2v2t_es_unispeech_s990 | jonatasgrosman | unispeech | 10 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 469 | false | # exp_w2v2t_es_unispeech_s990
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 9ebff4f1029679b63c999add9613670e |
Helsinki-NLP/opus-mt-en-az | Helsinki-NLP | marian | 11 | 39 | transformers | 0 | translation | true | true | false | apache-2.0 | ['en', 'az'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,016 | false |
### eng-aze
* source group: English
* target group: Azerbaijani
* OPUS readme: [eng-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): aze_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.aze | 18.6 | 0.477 |
### System Info:
- hf_name: eng-aze
- source_languages: eng
- target_languages: aze
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'az']
- src_constituents: {'eng'}
- tgt_constituents: {'aze_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: aze
- short_pair: en-az
- chrF2_score: 0.477
- bleu: 18.6
- brevity_penalty: 1.0
- ref_len: 13012.0
- src_name: English
- tgt_name: Azerbaijani
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: az
- prefer_old: False
- long_pair: eng-aze
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | d5a818e44ee6c15b14ed1a237c3c547d |
espnet/simpleoier_librilight_limited_asr_train_asr_hubert_base_10h_finetuning_raw_en_char | espnet | null | 20 | 8 | espnet | 0 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['en'] | ['librilight_limited'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 6,844 | false |
## ESPnet2 ASR model
### `simpleoier/simpleoier_librilight_limited_asr_train_asr_hubert_base_10h_finetuning_raw_en_char`
This model was trained by simpleoier using the librilight_limited recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 6752c23c61c95c9c8ba8547eab14cbd9b38d18e7
pip install -e .
cd egs2/librilight_limited/asr1
./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librilight_limited_asr_train_asr_hubert_base_10h_finetuning_raw_en_char
```
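Alternatively, a minimal Python inference sketch via `espnet_model_zoo` (assumes `pip install espnet espnet_model_zoo soundfile`; the audio path is illustrative):
```python
# Minimal sketch: decode a 16 kHz mono waveform with the pretrained ESPnet2 model.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "simpleoier/simpleoier_librilight_limited_asr_train_asr_hubert_base_10h_finetuning_raw_en_char"
)
speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```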
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Jan 10 17:33:54 EST 2023`
- python version: `3.9.15 (main, Nov 4 2022, 16:13:54) [GCC 11.2.0]`
- espnet version: `espnet 202209`
- pytorch version: `pytorch 1.12.1`
- Git hash: ``
- Commit date: ``
## asr_train_asr_hubert_base_10h_finetuning_raw_en_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.loss.ave/dev_clean|2694|53635|90.3|9.3|0.5|0.7|10.4|74.8|
|decode_asr_model_valid.loss.ave/dev_other|2864|50948|83.8|15.1|1.1|1.2|17.4|83.9|
|decode_asr_model_valid.loss.ave/test_clean|2620|52576|90.2|9.4|0.4|0.7|10.5|75.2|
|decode_asr_model_valid.loss.ave/test_other|2939|52343|83.6|15.2|1.1|1.3|17.6|85.3|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.loss.ave/dev_clean|2694|284127|97.8|1.2|1.0|0.8|3.0|74.8|
|decode_asr_model_valid.loss.ave/dev_other|2864|265951|95.4|2.5|2.0|1.5|6.1|83.9|
|decode_asr_model_valid.loss.ave/test_clean|2620|281530|97.8|1.2|1.0|0.8|3.0|75.2|
|decode_asr_model_valid.loss.ave/test_other|2939|272758|95.5|2.5|2.0|1.6|6.1|85.3|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_hubert_base_10h_finetuning.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_hubert_base_10h_finetuning_raw_en_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../../librispeech/ssl1/exp/hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw/valid.loss.ave.pth:encoder:encoder
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 3200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char/train/speech_shape
- exp/asr_stats_raw_en_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char/valid/speech_shape
- exp/asr_stats_raw_en_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_10h/wav.scp
- speech
- sound
- - dump/raw/train_10h/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_clean/wav.scp
- speech
- sound
- - dump/raw/dev_clean/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 5.0e-05
scheduler: warmuplr
scheduler_conf:
warmup_steps: 8000
token_list:
- <blank>
- <unk>
- <space>
- E
- T
- A
- O
- N
- I
- H
- S
- R
- D
- L
- U
- M
- W
- C
- F
- G
- Y
- P
- B
- V
- K
- ''''
- X
- J
- Q
- Z
- <sos/eos>
init: xavier_uniform
input_size: 1
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: null
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: torchaudiohubert
encoder_conf:
encoder_projection_dropout: 0.0
encoder_attention_dropout: 0.0
encoder_ff_interm_dropout: 0.1
encoder_dropout: 0.0
encoder_layer_drop: 0.05
mask_prob: 0.65
mask_channel_prob: 0.5
mask_channel_length: 64
num_classes: 500
finetuning: true
freeze_encoder_updates: 10000
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| f1007b095e60af937ff8579879a5dc3a |
Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer | Yoshiki | null | 26 | 2 | espnet | 0 | null | false | false | false | cc-by-4.0 | ['en'] | ['chime4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'speech-enhancement-recognition'] | false | true | true | 11,116 | false |
## ESPnet2 EnhS2T model
### `Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer`
This model was trained by Yoshiki using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 8ed83f45d5aa2ca6b3635e44b9c29afb9b5fb600
pip install -e .
cd egs2/chime4/enh_asr1
./run.sh --skip_data_prep false --skip_train true --download_model Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer
```
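For Python-level inference, here is a minimal, hedged sketch. The exact entry point for joint enhancement + ASR checkpoints can differ across ESPnet versions, so treat the `enh_s2t_task` flag and the multichannel input handling below as assumptions to verify against your installed release:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Assumption: your ESPnet release lets the generic ASR interface load
# joint enhancement + ASR checkpoints via the enh_s2t_task flag.
speech2text = Speech2Text.from_pretrained(
    "Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer",
    enh_s2t_task=True,
)

# The model was trained on the CHiME-4 6-channel track, so the input should
# be a 16 kHz multichannel array of shape (samples, channels).
speech, rate = soundfile.read("multichannel_sample.wav")  # placeholder path
text, *_ = speech2text(speech)[0]
print(text)
```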
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Oct 11 02:40:53 UTC 2022`
- python version: `3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]`
- espnet version: `espnet 202207`
- pytorch version: `pytorch 1.10.1+cu111`
- Git hash: ``
- Commit date: ``
## enh_asr_train_enh_asr_wpd_init_noenhloss_wavlm_conformer_raw_en_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_real_isolated_6ch_track|1640|27119|98.8|0.9|0.2|0.2|1.3|16.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_simu_isolated_6ch_track|1640|27120|98.9|0.9|0.2|0.1|1.3|15.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_real_isolated_6ch_track|1320|21409|98.4|1.4|0.2|0.2|1.8|20.6|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_simu_isolated_6ch_track|1320|21416|98.9|1.0|0.2|0.1|1.2|15.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_real_isolated_6ch_track|1640|160390|99.7|0.1|0.2|0.2|0.5|16.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_simu_isolated_6ch_track|1640|160400|99.7|0.1|0.2|0.1|0.5|15.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_real_isolated_6ch_track|1320|126796|99.5|0.2|0.3|0.2|0.7|20.6|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_simu_isolated_6ch_track|1320|126812|99.7|0.2|0.2|0.1|0.5|15.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## EnhS2T config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_asr_wpd_init_noenhloss_wavlm_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_asr_train_enh_asr_wpd_init_noenhloss_wavlm_conformer_raw_en_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 31
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
- - train
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../enh1/exp/enh_train_enh_beamformer_wpd_ci_sdr_shorttap_raw/valid.loss.best.pth:separator:enh_model.separator
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:frontend:s2t_model.frontend
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:preencoder:s2t_model.preencoder
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:encoder:s2t_model.encoder
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:ctc:s2t_model.ctc
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:decoder:s2t_model.decoder
ignore_init_mismatch: false
freeze_param:
- s2t_model.frontend.upstream
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_asr_stats_raw_en_char/train/speech_shape
- exp/enh_asr_stats_raw_en_char/train/speech_ref1_shape
- exp/enh_asr_stats_raw_en_char/train/text_spk1_shape.char
valid_shape_file:
- exp/enh_asr_stats_raw_en_char/valid/speech_shape
- exp/enh_asr_stats_raw_en_char/valid/speech_ref1_shape
- exp/enh_asr_stats_raw_en_char/valid/text_spk1_shape.char
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_multi_isolated_6ch_track/wav.scp
- speech
- sound
- - dump/raw/tr05_multi_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr05_multi_isolated_6ch_track/text_spk1
- text_spk1
- text
valid_data_path_and_name_and_type:
- - dump/raw/dt05_multi_isolated_6ch_track/wav.scp
- speech
- sound
- - dump/raw/dt05_multi_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/dt05_multi_isolated_6ch_track/text_spk1
- text_spk1
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: sgd
optim_conf:
lr: 0.001
momentum: 0.9
scheduler: null
scheduler_conf: {}
token_list: data/en_token_list/char/tokens.txt
src_token_list: null
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
enh_criterions:
- name: ci_sdr
conf:
filter_length: 512
wrapper: fixed_order
wrapper_conf:
weight: 0.1
diar_num_spk: null
diar_input_size: null
enh_model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
asr_model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
st_model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
diar_model_conf:
diar_weight: 1.0
attractor_weight: 1.0
subtask_series:
- enh
- asr
model_conf:
calc_enh_loss: false
bypass_enh_prob: 0.0
use_preprocessor: true
token_type: char
bpemodel: null
src_token_type: bpe
src_bpemodel: null
non_linguistic_symbols: data/nlsyms.txt
cleaner: null
g2p: null
text_name:
- text_spk1
enh_encoder: stft
enh_encoder_conf:
n_fft: 512
win_length: 400
hop_length: 128
use_builtin_complex: false
enh_separator: wpe_beamformer
enh_separator_conf:
num_spk: 1
loss_type: spectrum
use_wpe: false
wnet_type: blstmp
wlayers: 3
wunits: 512
wprojs: 512
wdropout_rate: 0.0
taps: 3
delay: 3
use_dnn_mask_for_wpe: true
use_beamformer: true
bnet_type: blstmp
blayers: 3
bunits: 512
bprojs: 512
badim: 320
ref_channel: 4
use_noise_mask: true
beamformer_type: wpd_souden
bdropout_rate: 0.0
enh_decoder: stft
enh_decoder_conf:
n_fft: 512
win_length: 400
hop_length: 128
enh_mask_module: multi_mask
enh_mask_module_conf: {}
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 100
num_freq_mask: 4
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
asr_preencoder: linear
asr_preencoder_conf:
input_size: 1024
output_size: 80
asr_encoder: conformer
asr_encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d2
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 15
asr_postencoder: null
asr_postencoder_conf: {}
asr_decoder: transformer
asr_decoder_conf:
input_layer: embed
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
st_preencoder: null
st_preencoder_conf: {}
st_encoder: rnn
st_encoder_conf: {}
st_postencoder: null
st_postencoder_conf: {}
st_decoder: rnn
st_decoder_conf: {}
st_extra_asr_decoder: rnn
st_extra_asr_decoder_conf: {}
st_extra_mt_decoder: rnn
st_extra_mt_decoder_conf: {}
diar_frontend: default
diar_frontend_conf: {}
diar_specaug: null
diar_specaug_conf: {}
diar_normalize: utterance_mvn
diar_normalize_conf: {}
diar_encoder: transformer
diar_encoder_conf: {}
diar_decoder: linear
diar_decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
diar_attractor: null
diar_attractor_conf: {}
required:
- output_dir
version: '202207'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 9fd4213178bdf0b4f44de2465f0c2d55 |
jonatasgrosman/exp_w2v2t_pl_unispeech-ml_s362 | jonatasgrosman | unispeech | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pl'] | false | true | true | 500 | false | # exp_w2v2t_pl_unispeech-ml_s362
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
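Since the model was produced with HuggingSound, a minimal usage sketch with that tool follows (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pl_unispeech-ml_s362")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
```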
| 500a57d00c3b0360671b14b9ec101257 |
Perselope/thesis-audio-1 | Perselope | wav2vec2 | 16 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,586 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-audio-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4268
- Wer: 0.3395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4633 | 4.0 | 500 | 1.4892 | 1.0006 |
| 0.5377 | 8.0 | 1000 | 0.4046 | 0.4163 |
| 0.1818 | 12.0 | 1500 | 0.4255 | 0.3850 |
| 0.1024 | 16.0 | 2000 | 0.4574 | 0.3644 |
| 0.0723 | 20.0 | 2500 | 0.4412 | 0.3550 |
| 0.0542 | 24.0 | 3000 | 0.4095 | 0.3404 |
| 0.0434 | 28.0 | 3500 | 0.4268 | 0.3395 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
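### Inference example
A minimal sketch with the `transformers` pipeline, assuming the repo includes the processor files the pipeline needs (the audio path is a placeholder and should point to 16 kHz audio):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Perselope/thesis-audio-1")
print(asr("sample.wav")["text"])  # placeholder path
```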
| e1b09acb376df79c7c2a856bda793d6d |
Kpontilala/Fine_tuned_distil_Based | Kpontilala | distilbert | 14 | 10 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,107 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_tuned_distil_Based
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7217
- eval_accuracy: 0.7317
- eval_runtime: 48.8174
- eval_samples_per_second: 61.454
- eval_steps_per_second: 7.682
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| d80ca9e4ad9b15dbe8687f7612564776 |
din0s/t5-base-pt-asqa-cb | din0s | t5 | 10 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,467 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-pt-asqa-cb
This model is a fine-tuned version of [din0s/t5-base-msmarco-nlgen-cb](https://huggingface.co/din0s/t5-base-msmarco-nlgen-cb) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7735
- Rougelsum: 26.3056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log | 1.0 | 273 | 2.9031 | 24.6325 |
| 3.2031 | 2.0 | 546 | 2.8656 | 24.9190 |
| 3.2031 | 3.0 | 819 | 2.8442 | 25.1197 |
| 3.0839 | 4.0 | 1092 | 2.8303 | 25.2855 |
| 3.0839 | 5.0 | 1365 | 2.8189 | 25.4891 |
| 3.0276 | 6.0 | 1638 | 2.8099 | 25.6116 |
| 3.0276 | 7.0 | 1911 | 2.8036 | 25.7411 |
| 3.0043 | 8.0 | 2184 | 2.7976 | 25.8238 |
| 3.0043 | 9.0 | 2457 | 2.7930 | 25.9201 |
| 2.9791 | 10.0 | 2730 | 2.7890 | 26.0322 |
| 2.9545 | 11.0 | 3003 | 2.7851 | 26.0934 |
| 2.9545 | 12.0 | 3276 | 2.7826 | 26.1574 |
| 2.9344 | 13.0 | 3549 | 2.7802 | 26.2041 |
| 2.9344 | 14.0 | 3822 | 2.7785 | 26.2330 |
| 2.9252 | 15.0 | 4095 | 2.7769 | 26.2394 |
| 2.9252 | 16.0 | 4368 | 2.7756 | 26.2676 |
| 2.9109 | 17.0 | 4641 | 2.7747 | 26.2864 |
| 2.9109 | 18.0 | 4914 | 2.7740 | 26.3146 |
| 2.9103 | 19.0 | 5187 | 2.7736 | 26.2993 |
| 2.9103 | 20.0 | 5460 | 2.7735 | 26.3056 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
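### Inference example
A hedged generation sketch; the exact input format used during ASQA fine-tuning (e.g. question plus retrieved passages) is not documented here, so the plain-question prompt is only illustrative:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="din0s/t5-base-pt-asqa-cb")
prompt = "Who painted the Mona Lisa?"  # placeholder; training-time formatting may differ
print(generator(prompt, max_length=128)[0]["generated_text"])
```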
| 3d4f4124acab30762d030968ecb68feb |
emigomez/vit-classify-manipulations-v2-2 | emigomez | vit | 14 | 27 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'generated_from_trainer'] | true | true | true | 10,759 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-classify-manipulations-ft
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the vit-classify-manipulations-v2-2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2064
- Accuracy: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2543 | 0.66 | 100 | 0.3142 | 0.8657 |
| 0.1982 | 1.32 | 200 | 0.3589 | 0.8694 |
| 0.3137 | 1.99 | 300 | 0.2064 | 0.9067 |
| 0.1667 | 2.65 | 400 | 0.2949 | 0.8955 |
| 0.1258 | 3.31 | 500 | 0.2742 | 0.9104 |
| 0.1316 | 3.97 | 600 | 0.3565 | 0.8843 |
| 0.099 | 4.64 | 700 | 0.2812 | 0.9216 |
| 0.0222 | 5.3 | 800 | 0.4249 | 0.8881 |
| 0.0728 | 5.96 | 900 | 0.3928 | 0.8955 |
| 0.0797 | 6.62 | 1000 | 0.3776 | 0.8955 |
| 0.0035 | 7.28 | 1100 | 0.4553 | 0.9030 |
| 0.0543 | 7.95 | 1200 | 0.4734 | 0.8843 |
| 0.0118 | 8.61 | 1300 | 0.3744 | 0.9104 |
| 0.0117 | 9.27 | 1400 | 0.5624 | 0.8843 |
| 0.0304 | 9.93 | 1500 | 0.4978 | 0.8881 |
| 0.0465 | 10.6 | 1600 | 0.4483 | 0.9104 |
| 0.0527 | 11.26 | 1700 | 0.5409 | 0.8806 |
| 0.0164 | 11.92 | 1800 | 0.5570 | 0.8955 |
| 0.0367 | 12.58 | 1900 | 0.5798 | 0.8955 |
| 0.0109 | 13.25 | 2000 | 0.5195 | 0.8843 |
| 0.0725 | 13.91 | 2100 | 0.3585 | 0.9067 |
| 0.0536 | 14.57 | 2200 | 0.4510 | 0.8918 |
| 0.0234 | 15.23 | 2300 | 0.4542 | 0.8843 |
| 0.1024 | 15.89 | 2400 | 0.4382 | 0.8955 |
| 0.0359 | 16.56 | 2500 | 0.5638 | 0.8918 |
| 0.0028 | 17.22 | 2600 | 0.4226 | 0.9216 |
| 0.0359 | 17.88 | 2700 | 0.4038 | 0.9216 |
| 0.0286 | 18.54 | 2800 | 0.3682 | 0.9030 |
| 0.0007 | 19.21 | 2900 | 0.4494 | 0.9291 |
| 0.0011 | 19.87 | 3000 | 0.5247 | 0.8993 |
| 0.0012 | 20.53 | 3100 | 0.4359 | 0.9142 |
| 0.0137 | 21.19 | 3200 | 0.4726 | 0.9179 |
| 0.0007 | 21.85 | 3300 | 0.6007 | 0.8955 |
| 0.0269 | 22.52 | 3400 | 0.3105 | 0.9216 |
| 0.0241 | 23.18 | 3500 | 0.3852 | 0.9030 |
| 0.1016 | 23.84 | 3600 | 0.4785 | 0.9142 |
| 0.0015 | 24.5 | 3700 | 0.5085 | 0.9104 |
| 0.0454 | 25.17 | 3800 | 0.6256 | 0.8731 |
| 0.0005 | 25.83 | 3900 | 0.5935 | 0.8993 |
| 0.0535 | 26.49 | 4000 | 0.5092 | 0.8918 |
| 0.0156 | 27.15 | 4100 | 0.7119 | 0.8806 |
| 0.0983 | 27.81 | 4200 | 0.4342 | 0.9104 |
| 0.0008 | 28.48 | 4300 | 0.5630 | 0.8955 |
| 0.0337 | 29.14 | 4400 | 0.4331 | 0.9216 |
| 0.0076 | 29.8 | 4500 | 0.5523 | 0.9067 |
| 0.0002 | 30.46 | 4600 | 0.5822 | 0.9104 |
| 0.0447 | 31.13 | 4700 | 0.4997 | 0.9142 |
| 0.0038 | 31.79 | 4800 | 0.4663 | 0.9179 |
| 0.001 | 32.45 | 4900 | 0.5102 | 0.8955 |
| 0.0037 | 33.11 | 5000 | 0.6439 | 0.8955 |
| 0.0009 | 33.77 | 5100 | 1.0129 | 0.8507 |
| 0.032 | 34.44 | 5200 | 0.5261 | 0.9030 |
| 0.0002 | 35.1 | 5300 | 0.6782 | 0.8993 |
| 0.0002 | 35.76 | 5400 | 0.6949 | 0.8918 |
| 0.0002 | 36.42 | 5500 | 0.6965 | 0.8955 |
| 0.0001 | 37.09 | 5600 | 0.6989 | 0.8993 |
| 0.0001 | 37.75 | 5700 | 0.7056 | 0.8955 |
| 0.0063 | 38.41 | 5800 | 0.7139 | 0.8955 |
| 0.0001 | 39.07 | 5900 | 0.7213 | 0.8955 |
| 0.0001 | 39.74 | 6000 | 0.7295 | 0.8993 |
| 0.0037 | 40.4 | 6100 | 0.7311 | 0.8955 |
| 0.0001 | 41.06 | 6200 | 0.7352 | 0.8955 |
| 0.0001 | 41.72 | 6300 | 0.7436 | 0.8955 |
| 0.0001 | 42.38 | 6400 | 0.7498 | 0.8955 |
| 0.0001 | 43.05 | 6500 | 0.7498 | 0.8955 |
| 0.0001 | 43.71 | 6600 | 0.7496 | 0.8993 |
| 0.0001 | 44.37 | 6700 | 0.7522 | 0.8993 |
| 0.0093 | 45.03 | 6800 | 0.7593 | 0.8955 |
| 0.0001 | 45.7 | 6900 | 0.7647 | 0.8955 |
| 0.0053 | 46.36 | 7000 | 0.7675 | 0.8955 |
| 0.0001 | 47.02 | 7100 | 0.7622 | 0.8993 |
| 0.0001 | 47.68 | 7200 | 0.7700 | 0.8955 |
| 0.0001 | 48.34 | 7300 | 0.7781 | 0.8955 |
| 0.0 | 49.01 | 7400 | 0.7741 | 0.8955 |
| 0.0 | 49.67 | 7500 | 0.7692 | 0.8993 |
| 0.0 | 50.33 | 7600 | 0.7770 | 0.8993 |
| 0.0 | 50.99 | 7700 | 0.7769 | 0.8993 |
| 0.0 | 51.66 | 7800 | 0.7871 | 0.8955 |
| 0.0 | 52.32 | 7900 | 0.7943 | 0.8955 |
| 0.0 | 52.98 | 8000 | 0.7936 | 0.8955 |
| 0.0 | 53.64 | 8100 | 0.7913 | 0.8955 |
| 0.0 | 54.3 | 8200 | 0.7855 | 0.8993 |
| 0.0 | 54.97 | 8300 | 0.7973 | 0.8955 |
| 0.0076 | 55.63 | 8400 | 0.7999 | 0.8955 |
| 0.0 | 56.29 | 8500 | 0.7915 | 0.8955 |
| 0.0 | 56.95 | 8600 | 0.7897 | 0.8993 |
| 0.0 | 57.62 | 8700 | 0.8058 | 0.8955 |
| 0.0 | 58.28 | 8800 | 0.7940 | 0.8993 |
| 0.0 | 58.94 | 8900 | 0.7937 | 0.8993 |
| 0.0 | 59.6 | 9000 | 0.7901 | 0.9030 |
| 0.0 | 60.26 | 9100 | 0.8114 | 0.8955 |
| 0.0 | 60.93 | 9200 | 0.8071 | 0.8955 |
| 0.0 | 61.59 | 9300 | 0.8013 | 0.8993 |
| 0.0046 | 62.25 | 9400 | 0.8147 | 0.8955 |
| 0.0 | 62.91 | 9500 | 0.8097 | 0.8993 |
| 0.0 | 63.58 | 9600 | 0.8180 | 0.8993 |
| 0.0 | 64.24 | 9700 | 0.8144 | 0.8955 |
| 0.0 | 64.9 | 9800 | 0.8224 | 0.8955 |
| 0.0 | 65.56 | 9900 | 0.8283 | 0.8955 |
| 0.0 | 66.23 | 10000 | 0.8273 | 0.8955 |
| 0.0 | 66.89 | 10100 | 0.8339 | 0.8993 |
| 0.0 | 67.55 | 10200 | 0.8225 | 0.8955 |
| 0.0041 | 68.21 | 10300 | 0.8342 | 0.8993 |
| 0.0 | 68.87 | 10400 | 0.8227 | 0.8955 |
| 0.0 | 69.54 | 10500 | 0.8293 | 0.8955 |
| 0.0 | 70.2 | 10600 | 0.8267 | 0.8993 |
| 0.0 | 70.86 | 10700 | 0.8262 | 0.8993 |
| 0.004 | 71.52 | 10800 | 0.8266 | 0.8993 |
| 0.0 | 72.19 | 10900 | 0.8373 | 0.8955 |
| 0.0 | 72.85 | 11000 | 0.8402 | 0.8993 |
| 0.0 | 73.51 | 11100 | 0.8453 | 0.8993 |
| 0.0 | 74.17 | 11200 | 0.8470 | 0.8955 |
| 0.0 | 74.83 | 11300 | 0.8496 | 0.8993 |
| 0.0 | 75.5 | 11400 | 0.8427 | 0.8993 |
| 0.0041 | 76.16 | 11500 | 0.8564 | 0.8993 |
| 0.0 | 76.82 | 11600 | 0.8508 | 0.8993 |
| 0.0 | 77.48 | 11700 | 0.8452 | 0.8993 |
| 0.0 | 78.15 | 11800 | 0.8498 | 0.8993 |
| 0.0 | 78.81 | 11900 | 0.8482 | 0.8993 |
| 0.0 | 79.47 | 12000 | 0.8565 | 0.8993 |
| 0.0 | 80.13 | 12100 | 0.8551 | 0.8993 |
| 0.0044 | 80.79 | 12200 | 0.8622 | 0.8993 |
| 0.0 | 81.46 | 12300 | 0.8532 | 0.8993 |
| 0.0 | 82.12 | 12400 | 0.8612 | 0.8993 |
| 0.0048 | 82.78 | 12500 | 0.8667 | 0.8993 |
| 0.0 | 83.44 | 12600 | 0.8603 | 0.8993 |
| 0.0 | 84.11 | 12700 | 0.8680 | 0.8993 |
| 0.0 | 84.77 | 12800 | 0.8653 | 0.8993 |
| 0.0 | 85.43 | 12900 | 0.8808 | 0.8993 |
| 0.0 | 86.09 | 13000 | 0.8697 | 0.8993 |
| 0.0 | 86.75 | 13100 | 0.8727 | 0.8993 |
| 0.0 | 87.42 | 13200 | 0.8833 | 0.8993 |
| 0.0 | 88.08 | 13300 | 0.8734 | 0.9030 |
| 0.0049 | 88.74 | 13400 | 0.8734 | 0.9030 |
| 0.0 | 89.4 | 13500 | 0.8862 | 0.8993 |
| 0.0 | 90.07 | 13600 | 0.8776 | 0.8993 |
| 0.0 | 90.73 | 13700 | 0.8796 | 0.9030 |
| 0.0 | 91.39 | 13800 | 0.8816 | 0.8993 |
| 0.0 | 92.05 | 13900 | 0.8825 | 0.8993 |
| 0.0 | 92.72 | 14000 | 0.8846 | 0.8993 |
| 0.0 | 93.38 | 14100 | 0.8930 | 0.8993 |
| 0.0 | 94.04 | 14200 | 0.8852 | 0.8993 |
| 0.0 | 94.7 | 14300 | 0.8833 | 0.9030 |
| 0.0 | 95.36 | 14400 | 0.8883 | 0.8993 |
| 0.0 | 96.03 | 14500 | 0.8877 | 0.8993 |
| 0.0043 | 96.69 | 14600 | 0.8918 | 0.8993 |
| 0.0 | 97.35 | 14700 | 0.8872 | 0.9030 |
| 0.0 | 98.01 | 14800 | 0.8891 | 0.8993 |
| 0.0 | 98.68 | 14900 | 0.8904 | 0.8993 |
| 0.0044 | 99.34 | 15000 | 0.8910 | 0.8993 |
| 0.0 | 100.0 | 15100 | 0.8914 | 0.8993 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
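### Inference example
A minimal sketch with the `transformers` pipeline (the image path is a placeholder; label names come from the repo's label mapping):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="emigomez/vit-classify-manipulations-v2-2")
print(classifier("example.jpg"))  # placeholder path; returns label/score pairs
```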
| fa6b7f5836cbe60db6605a00f1b7f528 |
mattchurgin/distilbert-mrpc | mattchurgin | distilbert | 10 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,091 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6783
- Accuracy: 0.8480
- F1: 0.8935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5916 | 0.22 | 100 | 0.5676 | 0.7157 | 0.8034 |
| 0.5229 | 0.44 | 200 | 0.4534 | 0.7770 | 0.8212 |
| 0.5055 | 0.65 | 300 | 0.4037 | 0.8137 | 0.8762 |
| 0.4597 | 0.87 | 400 | 0.3706 | 0.8407 | 0.8893 |
| 0.4 | 1.09 | 500 | 0.4590 | 0.8113 | 0.8566 |
| 0.3498 | 1.31 | 600 | 0.4196 | 0.8554 | 0.8974 |
| 0.2916 | 1.53 | 700 | 0.4606 | 0.8554 | 0.8933 |
| 0.3309 | 1.74 | 800 | 0.5162 | 0.8578 | 0.9027 |
| 0.3788 | 1.96 | 900 | 0.3911 | 0.8529 | 0.8980 |
| 0.2059 | 2.18 | 1000 | 0.5842 | 0.8554 | 0.8995 |
| 0.1595 | 2.4 | 1100 | 0.5701 | 0.8578 | 0.8975 |
| 0.1205 | 2.61 | 1200 | 0.6905 | 0.8407 | 0.8889 |
| 0.174 | 2.83 | 1300 | 0.6783 | 0.8480 | 0.8935 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
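### Inference example
MRPC is a sentence-pair task, so both sentences go into a single encoded input. A hedged sketch (the sentences are placeholders; check `model.config.id2label` for the label order):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "mattchurgin/distilbert-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Encode the sentence pair together, as in GLUE/MRPC fine-tuning
inputs = tokenizer("The storm hit the coast.", "A storm struck the coastline.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # in GLUE/MRPC, index 1 is conventionally "equivalent" (assumption)
```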
| 9b771ce8c7bd39a5ae3dd5f0e8efb03c |
ogimgio/bert-base-german-cased-finetuned-200labels-notrandom | ogimgio | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,094 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-200labels-notrandom
This model is a fine-tuned version of [ogimgio/bert-base-german-cased-finetuned-7labels](https://huggingface.co/ogimgio/bert-base-german-cased-finetuned-7labels) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1007
- Micro f1: 0.1030
- Macro f1: 0.0788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.8788 | 1.0 | 1380 | 0.8018 | 0.0855 | 0.0920 |
| 0.6458 | 2.0 | 2760 | 0.5829 | 0.0884 | 0.0939 |
| 0.4631 | 3.0 | 4140 | 0.4213 | 0.0942 | 0.0963 |
| 0.3375 | 4.0 | 5520 | 0.3143 | 0.1044 | 0.0997 |
| 0.2539 | 5.0 | 6900 | 0.2436 | 0.1091 | 0.1018 |
| 0.1987 | 6.0 | 8280 | 0.1944 | 0.1098 | 0.1003 |
| 0.1598 | 7.0 | 9660 | 0.1592 | 0.1094 | 0.0964 |
| 0.1326 | 8.0 | 11040 | 0.1349 | 0.1097 | 0.0937 |
| 0.1148 | 9.0 | 12420 | 0.1185 | 0.1089 | 0.0894 |
| 0.1025 | 10.0 | 13800 | 0.1077 | 0.1066 | 0.0839 |
| 0.0946 | 11.0 | 15180 | 0.1007 | 0.1030 | 0.0788 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.8.0
- Tokenizers 0.12.1
| 62d2fc24d50f7853b96907ee91f0d056 |
leminhds/distilbert-base-uncased-finetuned-emotion | leminhds | distilbert | 13 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,159 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1677
- eval_accuracy: 0.924
- eval_f1: 0.9238
- eval_runtime: 2.5188
- eval_samples_per_second: 794.026
- eval_steps_per_second: 12.704
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
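### Inference example
A minimal sketch, assuming the checkpoint was pushed together with its tokenizer (the input sentence is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="leminhds/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how happy this makes me!"))  # placeholder input
```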
| cd49b9d845ce578c2ffa80ce495de6c0 |
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-0 | anas-awadalla | roberta | 17 | 3 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 984 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| 8cf0d4f5964d0bcae02b057516c509ed |
stanfordnlp/stanza-es | stanfordnlp | null | 23 | 1,497 | stanza | 0 | token-classification | false | false | false | apache-2.0 | ['es'] | null | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | ['stanza', 'token-classification'] | false | true | true | 580 | false | # Stanza model for Spanish (es)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2022-10-27 07:41:48.905
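A minimal usage sketch with the `stanza` package; note that `stanza.download` fetches resources through Stanza's own distribution channel rather than directly from this repo (treat the exact processor set as an assumption of the default Spanish package):
```python
import stanza

stanza.download("es")        # fetch the Spanish resources
nlp = stanza.Pipeline("es")  # tokenization through NER by default (assumption)
doc = nlp("El Museo del Prado está en Madrid.")
print(doc.entities)
```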
| c2477082545d5a351bb29dd4ef793806 |
sd-concepts-library/meze-audio-elite-headphones | sd-concepts-library | null | 12 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 1,490 | false | ### Meze Audio Elite headphones on Stable Diffusion
This is the `<meze-elite>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







| 4dfab0aa4846d4032ead8d958713de59 |
lmvasque/readability-es-benchmark-mbert-es-sentences-2class | lmvasque | bert | 12 | 5 | transformers | 0 | text-classification | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 6,053 | false |
## Readability benchmark (ES): mbert-es-sentences-2class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| **[mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class)** | **sentences** | **2** |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
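## Usage
A hedged inference sketch for this 2-class sentence model (the example sentence is a placeholder; check `model.config.id2label` for the exact label order):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "lmvasque/readability-es-benchmark-mbert-es-sentences-2class"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("La fotosíntesis convierte la luz solar en energía química.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # two scores: simple vs. complex (label order is an assumption)
```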
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
``` | d26391b21e0052b5152cb2731bacaa25 |
haddadalwi/distilbert-base-uncased-finetuned-squad | haddadalwi | distilbert | 14 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,231 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 10 | 5.6821 |
| No log | 2.0 | 20 | 5.5273 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
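### Inference example
A minimal question-answering sketch; since the model was tuned on `squad_v2`, the unanswerable-question flag is shown (question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="haddadalwi/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Where was the treaty signed?",             # placeholder
    context="The treaty was signed in Geneva in 1949.",  # placeholder
    handle_impossible_answer=True,  # SQuAD v2 includes unanswerable questions
)
print(result)
```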
| f3cb0f41a21f4bc09762c8b9abc494de |
Akashpb13/Swahili_xlsr | Akashpb13 | wav2vec2 | 12 | 10 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sw'] | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'sw'] | true | true | true | 2,490 | false |
# Akashpb13/Swahili_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - sw dataset.
It achieves the following results on the evaluation set (10 percent of the training data merged with the dev datasets):
- Loss: 0.159032
- Wer: 0.187934
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Swahili train.tsv and dev.tsv
Only utterances with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
To create the training dataset, all available datasets were concatenated and a 90/10 train-validation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 4.810000 | 2.168847 | 0.995747 |
| 1000 | 0.564200 | 0.209411 | 0.303485 |
| 1500 | 0.217700 | 0.153959 | 0.239534 |
| 2000 | 0.150700 | 0.139901 | 0.216327 |
| 2500 | 0.119400 | 0.137543 | 0.208828 |
| 3000 | 0.099500 | 0.140921 | 0.203045 |
| 3500 | 0.087100 | 0.138835 | 0.199649 |
| 4000 | 0.074600 | 0.141297 | 0.195844 |
| 4500 | 0.066600 | 0.148560 | 0.194127 |
| 5000 | 0.060400 | 0.151214 | 0.194388 |
| 5500 | 0.054400 | 0.156072 | 0.192187 |
| 6000 | 0.051100 | 0.154726 | 0.190322 |
| 6500 | 0.048200 | 0.159847 | 0.189538 |
| 7000 | 0.046400 | 0.158727 | 0.188307 |
| 7500 | 0.046500 | 0.159032 | 0.187934 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Swahili_xlsr --dataset mozilla-foundation/common_voice_8_0 --config sw --split test
```
| cae8fbc33ebbdd6ac45fd738f8d481d1 |
apthakur/distilbert-base-uncased-apala-finetuned-emotion | apthakur | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,346 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-apala-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3696
- Accuracy: 0.476
- F1: 0.4250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 1.3899 | 0.476 | 0.4059 |
| No log | 2.0 | 500 | 1.3696 | 0.476 | 0.4250 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| dc80732139a694da86cd92836e1f3b38 |
DeepESP/gpt2-spanish | DeepESP | gpt2 | 10 | 1,231 | transformers | 16 | text-generation | true | true | true | mit | ['es'] | ['ebooks'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['GPT-2', 'Spanish', 'ebooks', 'nlg'] | false | true | true | 1,844 | false |
# GPT2-Spanish
GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the small version of the original OpenAI GPT2 model.
## Corpus
This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization).
## Tokenizer
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.
This tokenizer was trained from scratch on the Spanish corpus, since the tokenizer of the English models proved limited in capturing the semantic relations of Spanish, owing to the morphosyntactic differences between the two languages.
In addition to the special token "<|endoftext|>" used to mark text endings in the OpenAI GPT-2 models, the tokens "<|talk|>" and "<|ax1|>" through "<|ax9|>" were included so that they can serve as prompts in future training.
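For quick experimentation, a minimal generation sketch with the `transformers` pipeline (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="DeepESP/gpt2-spanish")
print(generator("Había una vez", max_length=50, num_return_sequences=1)[0]["generated_text"])
```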
## Training
The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers.
## Authors
The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h).
Thanks to the members of the community who collaborated with funding for the initial tests.
## Cautions
The model generates texts according to the patterns learned in the training corpus. Because these data were not filtered, the model may generate offensive or discriminatory content.
| bbc8fa097f0abc88783869e6baeaac37 |
ihanif/markhor-goat | ihanif | null | 17 | 40 | diffusers | 0 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal'] | false | true | true | 723 | false |
# DreamBooth model for the markhor concept trained by ihanif on the ihanif/markhor-images dataset.
This is a Stable Diffusion model fine-tuned on the markhor concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of markhor goat**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `goat` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('ihanif/markhor-goat')

# StableDiffusionPipeline requires a text prompt; use the instance prompt from above
image = pipeline("a photo of markhor goat").images[0]
image
```
| 7f16ed117aec105f070046b763c8f723 |
fgaim/tielectra-geezswitch | fgaim | electra | 9 | 1 | transformers | 0 | text-classification | true | false | false | cc-by-4.0 | ['ti'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['geezlab'] | true | true | true | 1,153 | false |
# TiELECTRA-GeezSwitch
This model is a fine-tuned version of [fgaim/tielectra-small](https://huggingface.co/fgaim/tielectra-small) on the [GeezSwitch](https://github.com/fgaim/geezswitch-data) dataset.
It achieves the following results on the test set:
- F1: 0.9844
- Recall: 0.9844
- Precision: 0.9845
- Accuracy: 0.9844
- Loss: 0.2190
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- seed: 42
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
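### Inference example
A hedged sketch (the sample sentence is a placeholder; check `model.config.id2label` for the exact language labels):
```python
from transformers import pipeline

lid = pipeline("text-classification", model="fgaim/tielectra-geezswitch")
print(lid("ሰላም፣ ከመይ ኣለኻ።"))  # placeholder Ge'ez-script sentence; returns the predicted language
```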
### Citation
If you use this model or the GeezSwitch dataset in your research, please cite as follows:
```bibtex
@inproceedings{fgaim2022geezswitch,
title={GeezSwitch: Language Identification in Typologically Related Low-resourced East African Languages},
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
year={2022}
}
```
| 420543ae52fb606bf99c7314512dfba0 |
seongwan/ddpm-butterflies-128 | seongwan | null | 13 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,230 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
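Until the authors fill in the snippet above, here is a hedged sketch using the standard `diffusers` unconditional pipeline (the output filename is a placeholder):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("seongwan/ddpm-butterflies-128")
image = pipeline().images[0]  # one unconditional 128x128 sample
image.save("butterfly.png")   # placeholder filename
```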
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/seongwan/ddpm-butterflies-128/tensorboard?#scalars)
| 0a80583f4dd7c50dc62892a88e77f6f3 |
Charalampos/whisper-large-el | Charalampos | whisper | 24 | 21 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['el'] | ['common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard'] | true | true | true | 1,560 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-el
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1895
- Wer: 8.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0987 | 0.2 | 1000 | 0.1966 | 13.6516 |
| 0.0772 | 0.4 | 2000 | 0.1812 | 12.2771 |
| 0.0398 | 0.6 | 3000 | 0.1734 | 11.3113 |
| 0.0775 | 0.8 | 4000 | 0.1699 | 9.7975 |
| 0.0314 | 1.0 | 5000 | 0.1895 | 8.9989 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
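### Inference example
A minimal transcription sketch (the audio path is a placeholder and should point to 16 kHz audio; `chunk_length_s` assumes a recent `transformers` release):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Charalampos/whisper-large-el",
    chunk_length_s=30,  # Whisper operates on 30-second windows
)
print(asr("greek_sample.wav")["text"])  # placeholder path
```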
| f7910c75c85659329e75f929be6f31da |
ypluit/stt_kr_citrinet1024_PublicCallCenter_1000H_0.26 | ypluit | null | 3 | 47 | nemo | 0 | automatic-speech-recognition | true | false | false | cc-by-4.0 | ['kr'] | ['RealCallData'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'speech', 'audio', 'Citrinet1024', 'NeMo', 'pytorch'] | true | true | true | 1,702 | false |
## Model Overview
This is a Korean automatic speech recognition model (Citrinet-1024) trained on roughly 1,200 hours of real call-center telephone speech, reaching 0.26 WER.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [1], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("ypluit/stt_kr_citrinet1024_PublicCallCenter_1000H_0.26")
```
### Transcribing using Python
First, get a sample audio file: any 16 kHz mono Korean telephone recording will do (e.g. `sample-kr.wav`).
Then simply do:
```python
asr_model.transcribe(['sample-kr.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="ypluit/stt_kr_citrinet1024_PublicCallCenter_1000H_0.26" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000Hz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
See the NeMo toolkit documentation and the referenced papers.
## Training
Trained for about 20 days on two A6000 GPUs.
### Datasets
Real private call-center data (1,200 hours)
## Performance
0.26 WER
## Limitations
This model was trained with 1,200 hours of Korean telephone voice data for customer service in a call center. Performance may be poor on general-purpose dialogue and particular accents.
## References
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) | 180a3208be49cfb4c005bfe00c746621 |
lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-5e-05 | lixiqi | beit | 14 | 1 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['image_folder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,508 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-5e-05
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8610
- Accuracy: 0.6833
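A minimal inference sketch (not part of the original card; it assumes a standard `transformers` image-classification head is saved with this checkpoint, and the image path is a placeholder):
```python
from transformers import pipeline

# FER2013 classes are facial expressions, so pass a cropped face image
classifier = pipeline(
    "image-classification",
    model="lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-5e-05",
)
print(classifier("face.jpg"))  # "face.jpg" is a hypothetical local file
```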
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1691 | 1.0 | 224 | 0.9764 | 0.6310 |
| 1.0304 | 2.0 | 448 | 0.8965 | 0.6666 |
| 0.9844 | 3.0 | 672 | 0.8610 | 0.6833 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 9fd1c684e75201b8c2b9beac15d9293c |
jonatasgrosman/exp_w2v2t_fr_xls-r_s859 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fr'] | false | true | true | 453 | false | # exp_w2v2t_fr_xls-r_s859
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
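A minimal usage sketch with HuggingSound (following the tool's documented API; the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fr_xls-r_s859")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # 16 kHz French speech
transcriptions = model.transcribe(audio_paths)
```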
| de458f0d1104141394ccfac20359e92b |
jonatasgrosman/wav2vec2-large-xlsr-53-arabic | jonatasgrosman | wav2vec2 | 8 | 936 | transformers | 3 | automatic-speech-recognition | true | false | true | apache-2.0 | ['ar'] | ['common_voice', 'arabic_speech_corpus'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 7,062 | false |
# Fine-tuned XLSR-53 large model for speech recognition in Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-arabic")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ar"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ألديك قلم ؟ | ألديك قلم |
| ليست هناك مسافة على هذه الأرض أبعد من يوم أمس. | ليست نالك مسافة على هذه الأرض أبعد من يوم الأمس م |
| إنك تكبر المشكلة. | إنك تكبر المشكلة |
| يرغب أن يلتقي بك. | يرغب أن يلتقي بك |
| إنهم لا يعرفون لماذا حتى. | إنهم لا يعرفون لماذا حتى |
| سيسعدني مساعدتك أي وقت تحب. | سيسئدنيمساعدتك أي وقد تحب |
| أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة. | أحب نظرية علمية إلي هي أن حل قتزح المكوينا بالكامل من الأمت عن المفقودة |
| سأشتري له قلماً. | سأشتري له قلما |
| أين المشكلة ؟ | أين المشكل |
| وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ | ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون |
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ar"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-14). Note that the table below may show different results from those already reported; this may be caused by specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-arabic | **39.59%** | **18.18%** |
| bakrianoo/sinai-voice-ar-stt | 45.30% | 21.84% |
| othrif/wav2vec2-large-xlsr-arabic | 45.93% | 20.51% |
| kmfoda/wav2vec2-large-xlsr-arabic | 54.14% | 26.07% |
| mohammed/wav2vec2-large-xlsr-arabic | 56.11% | 26.79% |
| anas/wav2vec2-large-xlsr-arabic | 62.02% | 27.09% |
| elgeish/wav2vec2-large-xlsr-53-arabic | 100.00% | 100.56% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-arabic,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {A}rabic},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-arabic}},
year={2021}
}
``` | de8331bba3d4a238c67c39b8896754d2 |
AJS50/bert-finetuned-MedicalChunk | AJS50 | bert | 10 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,161 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-MedicalChunk
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1762
- Precision: 0.2723
- Recall: 0.3065
- F1: 0.2884
- Accuracy: 0.9563
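A minimal inference sketch (not part of the original card; the medical label set is not documented here, so the sentence below is only an illustrative placeholder):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole-entity spans
chunker = pipeline(
    "token-classification",
    model="AJS50/bert-finetuned-MedicalChunk",
    aggregation_strategy="simple",
)
print(chunker("The patient was prescribed 20 mg of atorvastatin daily."))
```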
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 56 | 0.1631 | 0.0 | 0.0 | 0.0 | 0.9606 |
| No log | 2.0 | 112 | 0.1416 | 0.0638 | 0.0302 | 0.0410 | 0.9592 |
| No log | 3.0 | 168 | 0.1405 | 0.1982 | 0.2161 | 0.2067 | 0.9559 |
| No log | 4.0 | 224 | 0.1356 | 0.2771 | 0.2312 | 0.2521 | 0.9633 |
| No log | 5.0 | 280 | 0.1419 | 0.2928 | 0.2663 | 0.2789 | 0.9593 |
| No log | 6.0 | 336 | 0.1550 | 0.2732 | 0.2513 | 0.2618 | 0.9602 |
| No log | 7.0 | 392 | 0.1620 | 0.2732 | 0.2814 | 0.2772 | 0.9578 |
| No log | 8.0 | 448 | 0.1670 | 0.2585 | 0.3065 | 0.2805 | 0.9554 |
| 0.1137 | 9.0 | 504 | 0.1728 | 0.2553 | 0.3015 | 0.2765 | 0.9552 |
| 0.1137 | 10.0 | 560 | 0.1762 | 0.2723 | 0.3065 | 0.2884 | 0.9563 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| 3456b2a6840c655be1022a19e25420ca |
vantezzen/pankocat | vantezzen | null | 15 | 3 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 610 | false | ### Pnkct1 Dreambooth model trained by vantezzen with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
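A minimal `diffusers` inference sketch (an assumption based on the usual DreamBooth pattern; the instance token is inferred from the title above and its exact spelling/casing may differ):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("vantezzen/pankocat", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of pnkct1").images[0]  # "pnkct1" is the assumed instance token
image.save("pankocat.png")
```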
Sample pictures of this concept:
| 25e1b6fa56f8e4d80a366bdcc7fec7e8 |
henryscheible/eval_masked_102_rte | henryscheible | null | 13 | 0 | null | 0 | null | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,009 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_masked_102_rte
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7844
- Accuracy: 0.6137
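A minimal inference sketch (assuming the exported checkpoint carries a standard sequence-classification head; RTE inputs are premise/hypothesis pairs, and the pair below is a placeholder):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="henryscheible/eval_masked_102_rte")

# the text-classification pipeline accepts sentence pairs as a dict
print(clf({"text": "A man is playing a guitar.",
           "text_pair": "A man is playing an instrument."}))
```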
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
| 83ee2b22298749a9e1b154c96172506d |
Likalto4/acelerate-butterflies-x64 | Likalto4 | null | 6 | 2 | diffusers | 0 | unconditional-image-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class'] | false | true | true | 556 | false |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
It is similar to the other butterflies-x64 model; the difference is that this one was trained using the Accelerate script. The two models' outputs should be similar, as the only other difference is the batch size.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Likalto4/acelerate-butterflies-x64")  # the repo id must be a quoted string
image = pipeline().images[0]
image
```
| 9d2dbd641dc50dd2213bb8e0aa5832ad |
sd-concepts-library/twitch-league-of-legends | sd-concepts-library | null | 17 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,080 | false | ### Twitch League Of Legends on Stable Diffusion
This is the `<twitch-lol>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
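A minimal local-inference sketch (assuming the repo follows the standard `sd-concepts-library` layout with a `learned_embeds.bin` file, and using `CompVis/stable-diffusion-v1-4` as the base model):
```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

embeds_path = hf_hub_download("sd-concepts-library/twitch-league-of-legends", "learned_embeds.bin")
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# the file maps the placeholder token to its learned embedding, e.g. {"<twitch-lol>": tensor}
token, embedding = next(iter(torch.load(embeds_path).items()))
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe("a <twitch-lol> emblem on a banner").images[0]
```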
Here is the new concept you will be able to use as an `object`:












| 3e8f8588b013ede21bde09209afe75a7 |
spacy/ru_core_news_sm | spacy | null | 28 | 10 | spacy | 1 | token-classification | false | false | false | mit | ['ru'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 34,841 | false | ### Details: https://spacy.io/models/ru#ru_core_news_sm
Russian pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer.
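A minimal usage sketch (assuming the package has been installed, e.g. with `python -m spacy download ru_core_news_sm`):
```python
import spacy

nlp = spacy.load("ru_core_news_sm")
doc = nlp("Москва — столица России.")

# named entities (LOC/ORG/PER) and per-token POS/dependency tags
print([(ent.text, ent.label_) for ent in doc.ents])
print([(tok.text, tok.pos_, tok.dep_) for tok in doc])
```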
| Feature | Description |
| --- | --- |
| **Name** | `ru_core_news_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Nerus](https://github.com/natasha/nerus) (Alexander Kukushkin) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (900 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Degree=Pos\|POS=ADV`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=SCONJ`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Acc\|POS=NUM`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Case=Nom\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, 
`Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Number=Plur\|POS=ADJ\|StyleVariant=Short`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Number=Plur\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Cnd\|POS=SCONJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=Third`, `POS=PART\|Polarity=Neg`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Case=Nom\|Number=Plur\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=INTJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Nom\|Number=Plur\|POS=PRON`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Gen\|POS=PRON`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET`, `Case=Nom\|POS=PRON`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET`, 
`Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=First`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|POS=AUX`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=First`, `Case=Gen\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `POS=PART`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|POS=NUM`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, 
`Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON\|Person=Third`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Dat\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Degree=Cmp\|POS=ADV`, `Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=DET`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, 
`Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Act`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=Second`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `POS=ADV`, `Case=Acc\|POS=PRON`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Ins\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=Second`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `POS=SYM`, `Degree=Cmp\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Degree=Pos\|POS=ADJ`, 
`Case=Ins\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=PUNCT\|StyleVariant=Short`, `Case=Ins\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=SCONJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=First`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=Third`, `Degree=Cmp\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Par\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Gen\|Number=Plur\|POS=DET\|Person=Third`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADV`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|POS=NUM`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=ADV\|Polarity=Neg`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|POS=NUM`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=First`, `Case=Nom\|Gender=Neut\|POS=NUM`, `Case=Gen\|POS=VERB\|Polarity=Neg`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Number=Plur\|POS=PRON`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=Third`, `Case=Gen\|Number=Plur\|POS=PRON`, `Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `POS=CCONJ\|Polarity=Neg`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=PRON\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Mid`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=Second`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Acc\|POS=NUM`, `Aspect=Imp\|Number=Plur\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NUM`, 
`Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Loc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=First`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=First`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=Second`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=First`, `Foreign=Yes\|POS=PUNCT`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=Third`, `Case=Dat\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Plur\|POS=DET`, `Aspect=Imp\|POS=AUX\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|POS=PRON`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=PROPN`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Second\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=Second`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=First`, `Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=Third`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PUNCT`, `Animacy=Anim\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON\|Person=First`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=Second`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Ins\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADV`, `Foreign=Yes\|POS=PART`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=DET`, `Case=Loc\|Gender=Fem\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Mid`, `Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PUNCT`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, 
`Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=PUNCT`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADV`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=First`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `nummod:entity`, `nummod:gov`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.68 |
| `TOKEN_P` | 97.28 |
| `TOKEN_R` | 98.31 |
| `TOKEN_F` | 97.79 |
| `POS_ACC` | 98.77 |
| `MORPH_ACC` | 97.03 |
| `MORPH_MICRO_P` | 98.68 |
| `MORPH_MICRO_R` | 97.98 |
| `MORPH_MICRO_F` | 98.33 |
| `SENTS_P` | 99.89 |
| `SENTS_R` | 99.89 |
| `SENTS_F` | 99.89 |
| `DEP_UAS` | 95.87 |
| `DEP_LAS` | 94.62 |
| `TAG_ACC` | 98.77 |
| `LEMMA_ACC` | 0.00 |
| `ENTS_P` | 94.88 |
| `ENTS_R` | 95.09 |
| `ENTS_F` | 94.98 | | 2227f0e9f5fe3d96b5565dae7bb800e6 |
din0s/t5-base-finetuned-en-to-it-hrs | din0s | t5 | 10 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,074 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-en-to-it-hrs
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4678
- Bleu: 22.3501
- Gen Len: 50.294
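A minimal inference sketch (the task prefix is an assumption based on T5's usual translation format and may not match the fine-tuning setup):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="din0s/t5-base-finetuned-en-to-it-hrs")
out = translator("translate English to Italian: How are you today?", max_length=64)
print(out[0]["generated_text"])
```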
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.4526 | 1.0 | 1125 | 1.9406 | 11.7289 | 57.5773 |
| 1.2548 | 2.0 | 2250 | 1.8509 | 14.9652 | 53.1013 |
| 1.1458 | 3.0 | 3375 | 1.7841 | 16.7549 | 52.4607 |
| 1.048 | 4.0 | 4500 | 1.7393 | 18.0223 | 51.4573 |
| 0.9922 | 5.0 | 5625 | 1.6980 | 18.6182 | 51.4733 |
| 0.9691 | 6.0 | 6750 | 1.6702 | 19.1118 | 51.994 |
| 0.9382 | 7.0 | 7875 | 1.6493 | 19.9025 | 51.128 |
| 0.8995 | 8.0 | 9000 | 1.6272 | 20.2594 | 51.2807 |
| 0.8843 | 9.0 | 10125 | 1.6106 | 20.4571 | 50.9607 |
| 0.8634 | 10.0 | 11250 | 1.5819 | 20.6829 | 51.0007 |
| 0.8507 | 11.0 | 12375 | 1.5752 | 20.6869 | 51.46 |
| 0.824 | 12.0 | 13500 | 1.5612 | 20.8633 | 51.2387 |
| 0.8124 | 13.0 | 14625 | 1.5496 | 21.3232 | 50.684 |
| 0.8081 | 14.0 | 15750 | 1.5425 | 21.4131 | 50.544 |
| 0.7837 | 15.0 | 16875 | 1.5302 | 21.2258 | 51.0287 |
| 0.7752 | 16.0 | 18000 | 1.5244 | 21.6548 | 50.312 |
| 0.7698 | 17.0 | 19125 | 1.5197 | 21.6719 | 50.7993 |
| 0.7606 | 18.0 | 20250 | 1.5168 | 21.7322 | 50.5947 |
| 0.7527 | 19.0 | 21375 | 1.5128 | 21.8434 | 50.4273 |
| 0.7515 | 20.0 | 22500 | 1.5008 | 21.6784 | 50.4933 |
| 0.7436 | 21.0 | 23625 | 1.5010 | 21.955 | 50.2093 |
| 0.7307 | 22.0 | 24750 | 1.4976 | 21.9676 | 50.7 |
| 0.7311 | 23.0 | 25875 | 1.4919 | 22.1018 | 50.5687 |
| 0.7206 | 24.0 | 27000 | 1.4890 | 22.0666 | 50.198 |
| 0.7142 | 25.0 | 28125 | 1.4843 | 22.1885 | 50.312 |
| 0.7125 | 26.0 | 29250 | 1.4796 | 22.1068 | 50.3167 |
| 0.7069 | 27.0 | 30375 | 1.4843 | 22.2135 | 50.144 |
| 0.701 | 28.0 | 31500 | 1.4761 | 22.168 | 50.574 |
| 0.6968 | 29.0 | 32625 | 1.4777 | 22.1219 | 50.5933 |
| 0.704 | 30.0 | 33750 | 1.4745 | 22.179 | 50.4773 |
| 0.698 | 31.0 | 34875 | 1.4733 | 22.1779 | 50.3713 |
| 0.6816 | 32.0 | 36000 | 1.4756 | 22.3355 | 50.3967 |
| 0.681 | 33.0 | 37125 | 1.4713 | 22.3124 | 50.192 |
| 0.6896 | 34.0 | 38250 | 1.4701 | 22.2848 | 50.1133 |
| 0.6798 | 35.0 | 39375 | 1.4677 | 22.2537 | 50.1573 |
| 0.6908 | 36.0 | 40500 | 1.4686 | 22.2789 | 50.202 |
| 0.6765 | 37.0 | 41625 | 1.4687 | 22.2854 | 50.1687 |
| 0.679 | 38.0 | 42750 | 1.4675 | 22.3388 | 50.3127 |
| 0.6788 | 39.0 | 43875 | 1.4672 | 22.2971 | 50.2687 |
| 0.6744 | 40.0 | 45000 | 1.4678 | 22.3501 | 50.294 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
| e77237c115ec3580a7574fc009e035a1 |
Helsinki-NLP/opus-mt-sv-cs | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-sv-cs
* source languages: sv
* target languages: cs
* OPUS readme: [sv-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-cs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-cs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-cs/opus-2020-01-16.eval.txt)
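A minimal usage sketch with `transformers` (the standard pattern for Helsinki-NLP Marian checkpoints; the Swedish sentence is just a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-sv-cs"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Jag talar inte tjeckiska."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```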
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.cs | 27.5 | 0.488 |
| 848cbaafb7b50e749784eeb39aa889a6 |
ArafatBHossain/distiled_flip_model_emotion_alpha_0.8_v1 | ArafatBHossain | distilbert | 10 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,451 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distiled_flip_model_emotion_alpha_0.8_v1
This model is a fine-tuned version of [ArafatBHossain/distill_bert_fine_tuned_emotion_dataset](https://huggingface.co/ArafatBHossain/distill_bert_fine_tuned_emotion_dataset) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1615
- Accuracy: 0.9425
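A minimal inference sketch (assuming the checkpoint keeps the standard `transformers` text-classification head from the emotion fine-tuning; the input sentence is a placeholder):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="ArafatBHossain/distiled_flip_model_emotion_alpha_0.8_v1")
print(clf("I can't believe how happy this makes me!"))
```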
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1921 | 1.0 | 2000 | 0.2402 | 0.933 |
| 0.129 | 2.0 | 4000 | 0.1789 | 0.94 |
| 0.0869 | 3.0 | 6000 | 0.1615 | 0.9425 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.12.1
| 7242b7cc034e6d98bf51f5e33f9c8193 |
brianpaiva/bert-base-squad-v2-portuguese | brianpaiva | bert | 12 | 5 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,541 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-squad-v2-portuguese
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5445
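A minimal inference sketch (the Portuguese passage and question are illustrative, not drawn from the training data):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="brianpaiva/bert-base-squad-v2-portuguese")

contexto = "A Amazônia é a maior floresta tropical do mundo, localizada principalmente no Brasil."
pergunta = "Onde fica a maior parte da Amazônia?"

# Returns the answer span, its score, and character offsets within the context
print(qa(question=pergunta, context=contexto))
```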
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3221 | 1.0 | 654 | 1.5568 |
| 1.4383 | 2.0 | 1308 | 1.5087 |
| 1.1511 | 3.0 | 1962 | 1.6465 |
| 0.5951 | 4.0 | 2616 | 1.7428 |
| 0.4432 | 5.0 | 3270 | 2.1401 |
| 0.3554 | 6.0 | 3924 | 2.3629 |
| 0.2195 | 7.0 | 4578 | 2.5445 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 044867bd08dfaf71fd19773bdb04a692 |
sd-dreambooth-library/ricky-fort | sd-dreambooth-library | null | 25 | 3 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 1,586 | false | ### ricky fort on Stable Diffusion via Dreambooth
#### model by machinelearnear
This is the Stable Diffusion model fine-tuned with the ricky fort concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks ricky fort**
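A minimal `diffusers` sketch of that workflow (assuming a CUDA device and fp16 weights; drop the dtype argument or use CPU as needed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/ricky-fort", torch_dtype=torch.float16
).to("cuda")

# The instance prompt token "sks" selects the learned concept
image = pipe("a photo of sks ricky fort").images[0]
image.save("ricky_fort.png")
```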
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:







| de668ecd9acd8edbecc84f577ac1692a |
sd-concepts-library/glow-forest | sd-concepts-library | null | 10 | 0 | null | 14 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,148 | false | ### glow forest on Stable Diffusion
This is the `<dark-forest>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
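A short sketch of loading the embedding with `diffusers` (assumptions: a recent `diffusers` version that provides `load_textual_inversion`, and the base checkpoint choice is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <dark-forest> token embedding from this concept repo
pipe.load_textual_inversion("sd-concepts-library/glow-forest")

image = pipe("a painting of a mountain lake in the style of <dark-forest>").images[0]
image.save("glow_forest.png")
```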
Here is the new concept you will be able to use as a `style`:





| 16045b47fb466654349f145f4a92e498 |
Nadav/bert-base-historic-multilingual-64k-td-cased-squad-en | Nadav | bert | 10 | 7 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,328 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-historic-multilingual-64k-td-cased-squad-en
This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-64k-td-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9315 | 1.0 | 4659 | 1.7399 |
| 1.5775 | 2.0 | 9318 | 1.5474 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| eea5f05c3ccc087cb1a796bc806d5d62 |
JeremiahZ/bert-base-uncased-sst2 | JeremiahZ | bert | 17 | 135 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,334 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2478
- Accuracy: 0.9323
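A brief inference sketch (the input sentence is illustrative; SST-2 is a binary negative/positive task):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JeremiahZ/bert-base-uncased-sst2")
model = AutoModelForSequenceClassification.from_pretrained("JeremiahZ/bert-base-uncased-sst2")

inputs = tokenizer("A charming and often affecting journey.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label comes from the uploaded config (LABEL_0/LABEL_1 if no names were saved)
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```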
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1668 | 1.0 | 2105 | 0.2513 | 0.9174 |
| 0.1119 | 2.0 | 4210 | 0.2478 | 0.9323 |
| 0.0699 | 3.0 | 6315 | 0.2764 | 0.9266 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 2f69fea6dae90f3db4c3a70c87b3b339 |
Helsinki-NLP/opus-mt-es-wls | Helsinki-NLP | marian | 10 | 13 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-es-wls
* source languages: es
* target languages: wls
* OPUS readme: [es-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-wls/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-wls/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-wls/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.wls | 22.9 | 0.437 |
| c7145ffb24cea781f4c8613a052059a1 |
Helsinki-NLP/opus-mt-tvl-en | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-tvl-en
* source languages: tvl
* target languages: en
* OPUS readme: [tvl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.eval.txt)
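A quick inference sketch via the high-level `pipeline` API (the Tuvaluan input is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tvl-en")

# Illustrative Tuvaluan input ("thank you very much"); replace with your own source text
print(translator("Fakafetai lasi"))
```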
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.en | 37.3 | 0.528 |
| 814d0546ab6f28505adbfa546096f5e5 |
apple/coreml-stable-diffusion-v1-4 | apple | null | 87 | 0 | null | 9 | text-to-image | false | false | false | other | null | null | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'core-ml'] | false | true | true | 12,163 | false |
# Stable Diffusion v1-4 Model Card
This model was generated by Hugging Face using [Apple’s repository](https://github.com/apple/ml-stable-diffusion), which is distributed under the [ASCL](https://github.com/apple/ml-stable-diffusion/blob/main/LICENSE.md).
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned for 225k steps at resolution 512x512 on "laion-aesthetics v2 5+", with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
These weights here have been converted to Core ML for use on Apple Silicon hardware.
There are 4 variants of the Core ML weights:
```
coreml-stable-diffusion-v1-4
├── original
│ ├── compiled # Swift inference, "original" attention
│ └── packages # Python inference, "original" attention
└── split_einsum
├── compiled # Swift inference, "split_einsum" attention
└── packages # Python inference, "split_einsum" attention
```
Please refer to https://huggingface.co/blog/diffusers-coreml for details.
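For example, the Python-inference packages can be fetched selectively with `huggingface_hub` (a sketch; the `allow_patterns` filter matches the repo layout shown above):

```python
from huggingface_hub import snapshot_download

# Download only the "original" attention weights packaged for Python inference
local_dir = snapshot_download(
    repo_id="apple/coreml-stable-diffusion-v1-4",
    allow_patterns="original/packages/*",
)
print(local_dir)
```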
If you need weights for the 🧨 Diffusers library, please [visit this model instead](https://huggingface.co/CompVis/stable-diffusion-v1-4).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1 Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* | d046f0a50324bc22b62201234fac6fe5 |
meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1 | meghazisofiane | marian | 15 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['un_multi'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,932 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-1000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1294
- Bleu: 64.0048
- Meteor: 0.4903
- Gen Len: 21.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.0489 | 1.0 | 100 | 0.1287 | 63.7573 | 0.4877 | 21.79 |
| 0.0447 | 2.0 | 200 | 0.1293 | 63.8776 | 0.49 | 21.875 |
| 0.0442 | 3.0 | 300 | 0.1294 | 64.0048 | 0.4903 | 21.85 |
| 0.0433 | 4.0 | 400 | 0.1294 | 64.0048 | 0.4903 | 21.85 |
| 0.0429 | 5.0 | 500 | 0.1294 | 64.0048 | 0.4903 | 21.85 |
| 0.0435 | 6.0 | 600 | 0.1294 | 64.0048 | 0.4903 | 21.85 |
| 0.0429 | 7.0 | 700 | 0.1294 | 64.0048 | 0.4903 | 21.85 |
| 0.0426 | 8.0 | 800 | 0.1294 | 64.0048 | 0.4903 | 21.85 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 48375d4f762ca5a57f356ed46b77009b |
jonatasgrosman/exp_w2v2t_pl_vp-fr_s807 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pl'] | false | true | true | 469 | false | # exp_w2v2t_pl_vp-fr_s807
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
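A minimal transcription sketch with that tool (the audio path is illustrative):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pl_vp-fr_s807")

# Paths to 16 kHz audio files to transcribe
transcriptions = model.transcribe(["sample_pl.wav"])
print(transcriptions[0]["transcription"])
```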
| 03d93a2cc492aeb7b6d61db4cc7c2701 |
anwarvic/distilbert-base-uncased-for-fakenews | anwarvic | distilbert | 6 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 679 | false |
# DistilBERT (uncased) for FakeNews Classification
This model is a classification model built by fine-tuning
[DistilBERT base model](https://huggingface.co/distilbert-base-uncased).
This model was trained using
[fake-and-real-news-dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset)
for five epochs.
> **NOTE:**
This model is just a POC (proof-of-concept) for a fellowship I was applying for.
## Intended uses & limitations
Note that this model is primarily aimed at classifying an article as either
"Fake" or "Real".
### How to use
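A minimal inference sketch (assuming the uploaded config carries the Fake/Real label mapping; the headline is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="anwarvic/distilbert-base-uncased-for-fakenews",
)

# Label names are read from the checkpoint's id2label mapping
print(classifier("Scientists discover a city of gold under the Atlantic Ocean."))
```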
Check this [notebook](https://www.kaggle.com/code/mohamedanwarvic/fakenewsclassifier-fatima-fellowship) on Kaggle. | 62591c90200f95ed5b19b255d6461eb7 |
timm/maxvit_rmlp_nano_rw_256.sw_in1k | timm | null | 4 | 69 | timm | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagenet-1k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'timm'] | false | true | true | 22,223 | false | # Model card for maxvit_rmlp_nano_rw_256.sw_in1k
A timm specific MaxViT image classification model, using an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name containing the string `rw` denotes a `timm` specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.5
- GMACs: 4.5
- Activations (M): 31.9
- Image size: 256 x 256
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('maxvit_rmlp_nano_rw_256.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'maxvit_rmlp_nano_rw_256.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 192, 192])
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1024, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'maxvit_rmlp_nano_rw_256.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| 30b5c68862c220787e8ac38ea68fcd07 |
steveabecassis/t5-base-finetuned-xsum | steveabecassis | t5 | 10 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,173 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-xsum
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
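Although the card leaves the dataset unspecified, the model name suggests XSum-style summarization; a hedged usage sketch (the article text is illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("steveabecassis/t5-base-finetuned-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("steveabecassis/t5-base-finetuned-xsum")

article = (
    "The local council approved a new cycling path connecting the town centre "
    "to the railway station, with construction expected to finish next spring."
)
# T5 checkpoints are conventionally prompted with a task prefix
inputs = tokenizer("summarize: " + article, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```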
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 21 | 0.3747 | 0.7975 | 0.7421 | 0.7924 | 0.7932 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.13.2
| e8c976c5960de771adccbb242621fa5d |
Gladiator/bert-large-uncased_ner_wnut_17 | Gladiator | bert | 12 | 9 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['wnut_17'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,714 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased_ner_wnut_17
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2516
- Precision: 0.7053
- Recall: 0.5754
- F1: 0.6337
- Accuracy: 0.9603
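A usage sketch with the token-classification pipeline (the example tweet is illustrative; `aggregation_strategy` is a standard pipeline option, shown here for readability):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Gladiator/bert-large-uncased_ner_wnut_17",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("so glad i'm watching stranger things on netflix tonight"))
```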
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2143 | 0.6353 | 0.4605 | 0.5340 | 0.9490 |
| No log | 2.0 | 426 | 0.2299 | 0.7322 | 0.5036 | 0.5967 | 0.9556 |
| 0.1489 | 3.0 | 639 | 0.2137 | 0.6583 | 0.5945 | 0.6248 | 0.9603 |
| 0.1489 | 4.0 | 852 | 0.2494 | 0.7035 | 0.5789 | 0.6352 | 0.9604 |
| 0.0268 | 5.0 | 1065 | 0.2516 | 0.7053 | 0.5754 | 0.6337 | 0.9603 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| ef36e7ac96a011c2e20dd1d057eb155f |
mcurmei/single_label_N_max_long_training | mcurmei | distilbert | 12 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,630 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# single_label_N_max_long_training
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0568 | 1.0 | 674 | 1.9993 |
| 1.6024 | 2.0 | 1348 | 1.8497 |
| 1.0196 | 3.0 | 2022 | 1.9178 |
| 0.7622 | 4.0 | 2696 | 2.0412 |
| 0.6066 | 5.0 | 3370 | 2.2523 |
| 0.4136 | 6.0 | 4044 | 2.3845 |
| 0.3113 | 7.0 | 4718 | 2.5712 |
| 0.2777 | 8.0 | 5392 | 2.6790 |
| 0.208 | 9.0 | 6066 | 2.7464 |
| 0.1749 | 10.0 | 6740 | 2.8288 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| c72f23175d8925c6938db30ddbbb5b12 |
WoodRoof/shanghai | WoodRoof | null | 19 | 4 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 418 | false | ### Shanghai Dreambooth model trained by WoodRoof with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 5c1adffb05178b41862565e1dbe40e23 |
kurama/bert-finetuned-ner | kurama | bert | 12 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9322
- Recall: 0.9485
- F1: 0.9403
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0831 | 1.0 | 1756 | 0.0652 | 0.9213 | 0.9392 | 0.9302 | 0.9835 |
| 0.0413 | 2.0 | 3512 | 0.0567 | 0.9292 | 0.9495 | 0.9392 | 0.9861 |
| 0.0192 | 3.0 | 5268 | 0.0617 | 0.9322 | 0.9485 | 0.9403 | 0.9860 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 8fdcd61e755363cc8c99bf419761c358 |
sd-concepts-library/mikako-method | sd-concepts-library | null | 12 | 0 | null | 3 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 636 | false | ### mikako-method on Stable Diffusion
This is the `<m-m>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:

| 4b89b5948dfe9ac7aa0e7dcec3265623 |
dshvadskiy/bert-finetuned-ner | dshvadskiy | bert | 12 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2002'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,519 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1458
- Precision: 0.7394
- Recall: 0.7884
- F1: 0.7631
- Accuracy: 0.9656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1047 | 1.0 | 1041 | 0.1516 | 0.7173 | 0.7505 | 0.7335 | 0.9602 |
| 0.068 | 2.0 | 2082 | 0.1280 | 0.7470 | 0.7888 | 0.7673 | 0.9664 |
| 0.0406 | 3.0 | 3123 | 0.1458 | 0.7394 | 0.7884 | 0.7631 | 0.9656 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 21e7218bc94b555d911f701cb3afb413 |
tuwonga/zukki_style | tuwonga | null | 22 | 25 | diffusers | 20 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 4 | 0 | 4 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 2,460 | false | ### zukki_style
This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from **Ma vie de courgette**, a stop-motion animation movie. Use the token **_zukki_style_** in your prompts to apply the style.
_Download the ckpt file from the "files and versions" tab into the stable diffusion models folder of your web-ui of choice._
_This is an experimental model: it mainly renders characters rather than scenes/landscapes, but I've found the img2img output more interesting than txt2img. You can see the results in the second and third pics (original/img2img/img2img). It's probably best to check the "restore faces" option and play around with the denoising strength._
--
**Characters rendered with this model:**

_prompt and settings used: **[person] in zukki_style** | **Steps: 30, Sampler: Euler, CFG scale: 7.5**_
--
**Characters rendered with img2img:**

_prompt and settings used: **[person] in zukki_style** | **Steps: 30 - you can play around with settings**_
--
**Characters rendered with img2img:**

_prompt and settings used: **[person] in zukki_style** | **Steps: 30 - you can play around with settings**_
--
This model was trained with TheLastBen's Dreambooth notebook, using 32 images at 6400 steps with 25% of the text encoder.
--
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | a5b0bcf005df50346db1cd2397ca4a3b |
ejcho623/shoe2 | ejcho623 | null | 22 | 5 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,173 | false | ### Shoe2 on Stable Diffusion via Dreambooth
#### model by ejcho623
This is the Stable Diffusion model fine-tuned with the Shoe2 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a sks shoe**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
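For reference, a minimal inference sketch with `diffusers` (assuming this repo's weights load with `StableDiffusionPipeline`):

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("ejcho623/shoe2", torch_dtype=torch.float16).to("cuda")

# The instance prompt token for this concept is "a sks shoe"
image = pipe("a sks shoe on a white background").images[0]
image.save("shoe2.png")
```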
Here are the images used for training this concept:




| f1d19dd80c1153333e7fdf55f7b91603 |
nickmuchi/vit-base-beans | nickmuchi | vit | 14 | 7 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['beans'] | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | ['image-classification', 'generated_from_trainer'] | true | true | true | 1,512 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0505
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
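As a minimal usage sketch (assuming the standard `image-classification` pipeline; the image path is hypothetical):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nickmuchi/vit-base-beans")
print(classifier("bean_leaf.jpg"))  # hypothetical local image of a bean leaf
```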
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1166 | 1.54 | 100 | 0.0764 | 0.9850 |
| 0.1607 | 3.08 | 200 | 0.2114 | 0.9398 |
| 0.0067 | 4.62 | 300 | 0.0692 | 0.9774 |
| 0.005 | 6.15 | 400 | 0.0944 | 0.9624 |
| 0.0043 | 7.69 | 500 | 0.0505 | 0.9850 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 5f2a7a77733e11bcf3751253d094bc06 |
roseman/whisper-medium-ckb | roseman | whisper | 18 | 29 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,581 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-ckb
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2977
- Wer: 31.3503
## Model description
More information needed
## Intended uses & limitations
More information needed
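As a minimal transcription sketch (the audio path is hypothetical; any file readable by the pipeline should work):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="roseman/whisper-medium-ckb")
print(asr("sample.wav"))  # hypothetical Central Kurdish audio clip
```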
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1833
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.27 | 1.03 | 366 | 0.2722 | 43.9177 |
| 0.1498 | 2.07 | 732 | 0.2354 | 36.2845 |
| 0.0538 | 3.1 | 1098 | 0.2422 | 33.6539 |
| 0.0168 | 4.14 | 1464 | 0.2717 | 32.3842 |
| 0.0025 | 6.0 | 1830 | 0.2977 | 31.3503 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 068905c98020c79afb1899b5e31f564a |
stanfordnlp/stanza-ga | stanfordnlp | null | 8 | 72 | stanza | 0 | token-classification | false | false | false | apache-2.0 | ['ga'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stanza', 'token-classification'] | false | true | true | 578 | false | # Stanza model for Irish (ga)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
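A minimal usage sketch (assuming the `stanza` package is installed):

```python
import stanza

stanza.download("ga")        # download the Irish models
nlp = stanza.Pipeline("ga")  # build the default Irish pipeline
doc = nlp("Tá an aimsir go maith inniu.")  # "The weather is good today."
for sentence in doc.sentences:
    print([(word.text, word.upos) for word in sentence.words])
```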
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2022-09-25 01:23:16.709
| 646e589b75af4614cf2a937865547d1f |
jonatasgrosman/exp_w2v2t_id_unispeech-sat_s477 | jonatasgrosman | unispeech-sat | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'id'] | false | true | true | 463 | false | # exp_w2v2t_id_unispeech-sat_s477
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
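A minimal transcription sketch with HuggingSound (the audio path is hypothetical; input must be sampled at 16kHz):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_id_unispeech-sat_s477")
transcriptions = model.transcribe(["sample.wav"])  # hypothetical 16kHz Indonesian audio
print(transcriptions)
```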
| 9cfb49fc7e0d50c8582341d35855a4b0 |
iewaij/roberta-base-lm | iewaij | roberta | 11 | 6 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,917 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-lm-all
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
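As a sketch, the checkpoint can be queried with the standard `fill-mask` pipeline (the example sentence is hypothetical):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="iewaij/roberta-base-lm")
print(fill_mask("The borrower shall repay the <mask> in full."))
```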
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2966 | 1.0 | 1194 | 1.0711 |
| 1.0858 | 2.0 | 2388 | 0.9740 |
| 1.0055 | 3.0 | 3582 | 0.9273 |
| 0.9301 | 4.0 | 4776 | 0.8784 |
| 0.9021 | 5.0 | 5970 | 0.8731 |
| 0.8479 | 6.0 | 7164 | 0.8406 |
| 0.8142 | 7.0 | 8358 | 0.8172 |
| 0.7858 | 8.0 | 9552 | 0.8158 |
| 0.7529 | 9.0 | 10746 | 0.7922 |
| 0.7189 | 10.0 | 11940 | 0.7855 |
| 0.7032 | 11.0 | 13134 | 0.7761 |
| 0.6795 | 12.0 | 14328 | 0.7549 |
| 0.6673 | 13.0 | 15522 | 0.7277 |
| 0.6412 | 14.0 | 16716 | 0.7121 |
| 0.6321 | 15.0 | 17910 | 0.7168 |
| 0.6198 | 16.0 | 19104 | 0.7109 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 41db097305a96fb7b9a16a3963187b60 |
MultiBertGunjanPatrick/multiberts-seed-2-1000k | MultiBertGunjanPatrick | bert | 7 | 2 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-2'] | false | true | true | 6,487 | false | # MultiBERTs Seed 2 Checkpoint 1000k (uncased)
Seed 2 intermediate checkpoint 1000k of the MultiBERTs (pretrained BERT) models, trained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1000k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
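As a toy illustration of this 80/10/10 scheme (not the actual pretraining code; the vocabulary here is just a stand-in):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the BERT-style masking described above to a token list."""
    masked = []
    for tok in tokens:
        if random.random() < mask_prob:
            r = random.random()
            if r < 0.8:
                masked.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)                   # 10%: keep unchanged
        else:
            masked.append(tok)
    return masked

print(mask_tokens("the quick brown fox".split(), vocab=["dog", "cat", "runs"]))
```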
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| e7d3ea70fcbc6eb2a438f50d1be99831 |
jonatasgrosman/exp_w2v2t_ru_vp-es_s729 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ru'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'ru'] | false | true | true | 469 | false | # exp_w2v2t_ru_vp-es_s729
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
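A minimal transcription sketch with HuggingSound (the audio path is hypothetical; input must be sampled at 16kHz):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ru_vp-es_s729")
transcriptions = model.transcribe(["sample.wav"])  # hypothetical 16kHz Russian audio
print(transcriptions)
```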
| ca98b6719dfcd6eae439532201a4e528 |
google/t5-efficient-base-el16 | google | t5 | 12 | 8 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,253 | false |
# T5-Efficient-BASE-EL16 (Deep-Narrow version)
T5-Efficient-BASE-EL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-el16** - is of model type **Base** with the following variations:
- **el** is **16**
It has **251.25** million parameters and thus requires *ca.* **1005.01 MB** of memory in full precision (*fp32*)
or **502.51 MB** of memory in half precision (*fp16* or *bf16*).
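As a quick sanity check of these figures (assuming 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16):

```python
params = 251.25e6  # parameter count quoted above

print(f"fp32 : {params * 4 / 1e6:,.2f} MB")       # ~1,005 MB, matching the value above
print(f"fp16/bf16 : {params * 2 / 1e6:,.2f} MB")  # ~502.5 MB
```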
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 07da06646d4de422cd1bc5d9f5bbcaac |
google/t5-small-lm-adapt | google | t5 | 10 | 11,818 | transformers | 5 | text2text-generation | true | true | false | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['t5-lm-adapt'] | false | true | true | 3,130 | false |
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted
## Version 1.1 - LM-Adapted
[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-small):
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
and is pretrained on both the denoising and the language modeling objectives.
More specifically, this checkpoint is initialized from [T5 Version 1.1 - Small](https://huggingface.co/google/t5-v1_1-small)
and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
This adaptation improves the ability of the model to be used for prompt tuning.
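As a minimal generation sketch (note this is a pretrained checkpoint with LM adaptation only, so raw outputs without fine-tuning or prompt tuning may be rough):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-small-lm-adapt")
model = T5ForConditionalGeneration.from_pretrained("google/t5-small-lm-adapt")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```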
**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

| fb7fb5decfe5965fc54ec9b4f9d57d7d |
Stricky/BelleDelphine-Person-Dreambooth | Stricky | null | 2 | 0 | null | 0 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,066 | false | # Belle Delphine Model [Dreambooth]
Dreambooth model trained on images of [Belle Delphine](https://www.instagram.com/belle.delphine).
[](https://www.buymeacoffee.com/stricky)
### Settings
```
Token: belle delphine
Class: person
Steps: 2000
Training image count: 20
Regularization images: https://github.com/djbielejeski/Stable-Diffusion-Regularization-Images-person_ddim
Regularization image count: 1000
```
### Comparisons
- Steps/CFG scale

- Sampler/CFG scale

### Sample images

### Sample output

| 037d2239b3ed3259a1194f6bab22e439 |
madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1 | madlag | bert | 166 | 45 | transformers | 1 | question-answering | true | true | false | mit | ['en'] | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering'] | false | true | true | 4,373 | false |
## bert-large-uncased-whole-word-masking model fine-tuned on SQuAD v2
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 16.0%** of the original weights.
The model contains **24.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices it ran **2.63x as fast as bert-large-uncased-whole-word-masking** on the evaluation.
This is possible because the pruning method leads to structured matrices: to visualize them, hover over the plot below to see the non-zero/zero parts of each matrix.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1/raw/main/model_card/density_info.js" id="0e65059e-a61d-4561-947e-b8f47b818bb8"></script></div>
In terms of accuracy, its **F1 is 82.57**, compared with 85.85 for bert-large-uncased-whole-word-masking, an **F1 drop of 3.28**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-large-uncased-whole-word-masking) checkpoint on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2).
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 190 heads were removed out of a total of 384 (49.5%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1/raw/main/model_card/pruning_info.js" id="f7ae9ec9-d050-46d0-b237-3025165e9504"></script></div>
## Details of the SQuAD 2.0 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD 2.0 | train | 130.0K |
| SQuAD 2.0 | eval | 11.9k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `1084MB` (original BERT: `1228.0MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **79.70** | **82.83** | **-4.13**|
| **F1** | **82.57** | **85.85** | **-3.28**|
```
{
"HasAns_exact": 74.8144399460189,
"HasAns_f1": 80.555306012496,
"HasAns_total": 5928,
"NoAns_exact": 84.57527333894029,
"NoAns_f1": 84.57527333894029,
"NoAns_total": 5945,
"best_exact": 79.70184452118251,
"best_exact_thresh": 0.0,
"best_f1": 82.56816761071966,
"best_f1_thresh": 0.0,
"exact": 79.70184452118251,
"f1": 82.56816761071981,
"total": 11873
}
```
## Example Usage
Install nn_pruning: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` once the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1",
tokenizer="madlag/bert-large-uncased-wwm-squadv2-x2.63-f82.6-d16-hybrid-v1"
)
print("bert-large-uncased-whole-word-masking parameters: 445.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")
print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print("Predictions", predictions)
``` | b66dcbb991cd12d6c86dc8b26f9d1181 |
NbAiLab/wav2vec2-1b-nst | NbAiLab | wav2vec2 | 51 | 8 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'NbAiLab/NST', 'generated_from_trainer'] | true | true | true | 61,191 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-1b-nst
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the NBAILAB/NST - NO-CLOSE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1921
- Wer: 0.0469
## Model description
More information needed
## Intended uses & limitations
More information needed
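As a minimal transcription sketch (the audio path is hypothetical; input must be sampled at 16 kHz):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NbAiLab/wav2vec2-1b-nst")
print(asr("sample.wav"))  # hypothetical 16 kHz Norwegian audio clip
```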
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 3.3367 | 0.04 | 500 | 3.4163 | 0.9938 |
| 2.9738 | 0.08 | 1000 | 2.9497 | 0.9950 |
| 2.8161 | 0.12 | 1500 | 2.7508 | 1.0139 |
| 1.6716 | 0.17 | 2000 | 1.2780 | 0.9512 |
| 0.4444 | 0.21 | 2500 | 0.2886 | 0.3434 |
| 0.4584 | 0.25 | 3000 | 0.2339 | 0.2520 |
| 0.3229 | 0.29 | 3500 | 0.2077 | 0.2047 |
| 0.3201 | 0.33 | 4000 | 0.1753 | 0.1647 |
| 0.3029 | 0.37 | 4500 | 0.2083 | 0.1939 |
| 0.2539 | 0.41 | 5000 | 0.1839 | 0.1389 |
| 0.2613 | 0.46 | 5500 | 0.1561 | 0.1328 |
| 0.2288 | 0.5 | 6000 | 0.1552 | 0.1404 |
| 0.2261 | 0.54 | 6500 | 0.1471 | 0.1172 |
| 0.1993 | 0.58 | 7000 | 0.1663 | 0.1344 |
| 0.2082 | 0.62 | 7500 | 0.1424 | 0.1086 |
| 0.1976 | 0.66 | 8000 | 0.1395 | 0.1102 |
| 0.1854 | 0.7 | 8500 | 0.1331 | 0.1184 |
| 0.1855 | 0.75 | 9000 | 0.1306 | 0.1049 |
| 0.1898 | 0.79 | 9500 | 0.1563 | 0.1053 |
| 0.1874 | 0.83 | 10000 | 0.1361 | 0.1011 |
| 0.1693 | 0.87 | 10500 | 0.1317 | 0.0975 |
| 0.1731 | 0.91 | 11000 | 0.1552 | 0.1119 |
| 0.1739 | 0.95 | 11500 | 0.1256 | 0.0863 |
| 0.1621 | 0.99 | 12000 | 0.1359 | 0.0890 |
| 0.1561 | 1.03 | 12500 | 0.1252 | 0.0833 |
| 0.161 | 1.08 | 13000 | 0.1243 | 0.0802 |
| 0.1512 | 1.12 | 13500 | 0.1287 | 0.0829 |
| 0.145 | 1.16 | 14000 | 0.1273 | 0.0809 |
| 0.1452 | 1.2 | 14500 | 0.1277 | 0.0797 |
| 0.1506 | 1.24 | 15000 | 0.1358 | 0.0795 |
| 0.1463 | 1.28 | 15500 | 0.1276 | 0.0774 |
| 0.1361 | 1.32 | 16000 | 0.1206 | 0.0789 |
| 0.1405 | 1.37 | 16500 | 0.1303 | 0.0791 |
| 0.1439 | 1.41 | 17000 | 0.1319 | 0.0804 |
| 0.1429 | 1.45 | 17500 | 0.1179 | 0.0768 |
| 0.1299 | 1.49 | 18000 | 0.1380 | 0.0812 |
| 0.1423 | 1.53 | 18500 | 0.1228 | 0.0768 |
| 0.1432 | 1.57 | 19000 | 0.1221 | 0.0774 |
| 0.1261 | 1.61 | 19500 | 0.1309 | 0.0781 |
| 0.1338 | 1.66 | 20000 | 0.1332 | 0.0761 |
| 0.1345 | 1.7 | 20500 | 0.1198 | 0.0770 |
| 0.1312 | 1.74 | 21000 | 0.1226 | 0.0746 |
| 0.1269 | 1.78 | 21500 | 0.1312 | 0.0759 |
| 0.1322 | 1.82 | 22000 | 0.1251 | 0.0751 |
| 0.1329 | 1.86 | 22500 | 0.1192 | 0.0715 |
| 0.1423 | 1.9 | 23000 | 0.1114 | 0.0718 |
| 0.1223 | 1.95 | 23500 | 0.1277 | 0.0733 |
| 0.1278 | 1.99 | 24000 | 0.1287 | 0.0751 |
| 0.1128 | 2.03 | 24500 | 0.1256 | 0.0702 |
| 0.112 | 2.07 | 25000 | 0.1258 | 0.0711 |
| 0.115 | 2.11 | 25500 | 0.1271 | 0.0718 |
| 0.1131 | 2.15 | 26000 | 0.1223 | 0.0717 |
| 0.1175 | 2.19 | 26500 | 0.1229 | 0.0725 |
| 0.1156 | 2.24 | 27000 | 0.1235 | 0.0719 |
| 0.104 | 2.28 | 27500 | 0.1273 | 0.0698 |
| 0.1201 | 2.32 | 28000 | 0.1236 | 0.0710 |
| 0.1182 | 2.36 | 28500 | 0.1191 | 0.0691 |
| 0.123 | 2.4 | 29000 | 0.1221 | 0.0706 |
| 0.1244 | 2.44 | 29500 | 0.1154 | 0.0718 |
| 0.1143 | 2.48 | 30000 | 0.1089 | 0.0693 |
| 0.1137 | 2.52 | 30500 | 0.1360 | 0.0762 |
| 0.1056 | 2.57 | 31000 | 0.1302 | 0.0709 |
| 0.1038 | 2.61 | 31500 | 0.1170 | 0.0690 |
| 0.1095 | 2.65 | 32000 | 0.1148 | 0.0680 |
| 0.1109 | 2.69 | 32500 | 0.1213 | 0.0714 |
| 0.1029 | 2.73 | 33000 | 0.1145 | 0.0680 |
| 0.1089 | 2.77 | 33500 | 0.1274 | 0.0701 |
| 0.1015 | 2.81 | 34000 | 0.1274 | 0.0703 |
| 0.1043 | 2.86 | 34500 | 0.1250 | 0.0705 |
| 0.1307 | 2.9 | 35000 | 0.1275 | 0.0721 |
| 0.1003 | 2.94 | 35500 | 0.1163 | 0.0656 |
| 0.0945 | 2.98 | 36000 | 0.1144 | 0.0673 |
| 0.0886 | 3.02 | 36500 | 0.1190 | 0.0680 |
| 0.0944 | 3.06 | 37000 | 0.1265 | 0.0667 |
| 0.105 | 3.1 | 37500 | 0.1202 | 0.0715 |
| 0.0989 | 3.15 | 38000 | 0.1216 | 0.0682 |
| 0.1013 | 3.19 | 38500 | 0.1253 | 0.0704 |
| 0.1061 | 3.23 | 39000 | 0.1274 | 0.0702 |
| 0.1006 | 3.27 | 39500 | 0.1165 | 0.0705 |
| 0.0976 | 3.31 | 40000 | 0.1210 | 0.0684 |
| 0.094 | 3.35 | 40500 | 0.1243 | 0.0703 |
| 0.0918 | 3.39 | 41000 | 0.1272 | 0.0694 |
| 0.0981 | 3.44 | 41500 | 0.1342 | 0.0726 |
| 0.0919 | 3.48 | 42000 | 0.1268 | 0.0717 |
| 0.0987 | 3.52 | 42500 | 0.1159 | 0.0677 |
| 0.095 | 3.56 | 43000 | 0.1174 | 0.0673 |
| 0.0955 | 3.6 | 43500 | 0.1227 | 0.0667 |
| 0.0925 | 3.64 | 44000 | 0.1232 | 0.0679 |
| 0.1099 | 3.68 | 44500 | 0.1257 | 0.0666 |
| 0.0908 | 3.73 | 45000 | 0.1198 | 0.0660 |
| 0.0923 | 3.77 | 45500 | 0.1301 | 0.0694 |
| 0.0898 | 3.81 | 46000 | 0.1217 | 0.0684 |
| 0.095 | 3.85 | 46500 | 0.1202 | 0.0681 |
| 0.0925 | 3.89 | 47000 | 0.1255 | 0.0675 |
| 0.0948 | 3.93 | 47500 | 0.1176 | 0.0665 |
| 0.0905 | 3.97 | 48000 | 0.1166 | 0.0646 |
| 0.0856 | 4.01 | 48500 | 0.1217 | 0.0644 |
| 0.0864 | 4.06 | 49000 | 0.1159 | 0.0633 |
| 0.0891 | 4.1 | 49500 | 0.1231 | 0.0654 |
| 0.1031 | 4.14 | 50000 | 0.1201 | 0.0643 |
| 0.0887 | 4.18 | 50500 | 0.1191 | 0.0648 |
| 0.083 | 4.22 | 51000 | 0.1164 | 0.0644 |
| 0.0904 | 4.26 | 51500 | 0.1162 | 0.0659 |
| 0.0819 | 4.3 | 52000 | 0.1185 | 0.0639 |
| 0.0833 | 4.35 | 52500 | 0.1096 | 0.0635 |
| 0.0787 | 4.39 | 53000 | 0.1264 | 0.0662 |
| 0.0845 | 4.43 | 53500 | 0.1209 | 0.0652 |
| 0.0852 | 4.47 | 54000 | 0.1213 | 0.0645 |
| 0.0772 | 4.51 | 54500 | 0.1210 | 0.0644 |
| 0.0817 | 4.55 | 55000 | 0.1260 | 0.0643 |
| 0.1003 | 4.59 | 55500 | 0.1243 | 0.0664 |
| 0.089 | 4.64 | 56000 | 0.1160 | 0.0637 |
| 0.0924 | 4.68 | 56500 | 0.1201 | 0.0661 |
| 0.0782 | 4.72 | 57000 | 0.1309 | 0.0677 |
| 0.0791 | 4.76 | 57500 | 0.1267 | 0.0661 |
| 0.0873 | 4.8 | 58000 | 0.1205 | 0.0649 |
| 0.0808 | 4.84 | 58500 | 0.1207 | 0.0648 |
| 0.0916 | 4.88 | 59000 | 0.1208 | 0.0623 |
| 0.0851 | 4.93 | 59500 | 0.1335 | 0.0650 |
| 0.0877 | 4.97 | 60000 | 0.1196 | 0.0619 |
| 0.0794 | 5.01 | 60500 | 0.1436 | 0.0689 |
| 0.0819 | 5.05 | 61000 | 0.1301 | 0.0647 |
| 0.0767 | 5.09 | 61500 | 0.1329 | 0.0650 |
| 0.0726 | 5.13 | 62000 | 0.1321 | 0.0654 |
| 0.0767 | 5.17 | 62500 | 0.1281 | 0.0668 |
| 0.0749 | 5.22 | 63000 | 0.1254 | 0.0631 |
| 0.0782 | 5.26 | 63500 | 0.1148 | 0.0606 |
| 0.0777 | 5.3 | 64000 | 0.1292 | 0.0641 |
| 0.0867 | 5.34 | 64500 | 0.1218 | 0.0644 |
| 0.0731 | 5.38 | 65000 | 0.1347 | 0.0653 |
| 0.0791 | 5.42 | 65500 | 0.1250 | 0.0611 |
| 0.0781 | 5.46 | 66000 | 0.1279 | 0.0647 |
| 0.0693 | 5.5 | 66500 | 0.1136 | 0.0607 |
| 0.0819 | 5.55 | 67000 | 0.1236 | 0.0629 |
| 0.0726 | 5.59 | 67500 | 0.1199 | 0.0619 |
| 0.0792 | 5.63 | 68000 | 0.1262 | 0.0638 |
| 0.0728 | 5.67 | 68500 | 0.1246 | 0.0630 |
| 0.0785 | 5.71 | 69000 | 0.1234 | 0.0627 |
| 0.0745 | 5.75 | 69500 | 0.1184 | 0.0627 |
| 0.0734 | 5.79 | 70000 | 0.1255 | 0.0628 |
| 0.0743 | 5.84 | 70500 | 0.1268 | 0.0615 |
| 0.0819 | 5.88 | 71000 | 0.1221 | 0.0632 |
| 0.0745 | 5.92 | 71500 | 0.1261 | 0.0636 |
| 0.0684 | 5.96 | 72000 | 0.1266 | 0.0617 |
| 0.0692 | 6.0 | 72500 | 0.1245 | 0.0620 |
| 0.0649 | 6.04 | 73000 | 0.1288 | 0.0628 |
| 0.0661 | 6.08 | 73500 | 0.1221 | 0.0623 |
| 0.0761 | 6.13 | 74000 | 0.1200 | 0.0638 |
| 0.0629 | 6.17 | 74500 | 0.1155 | 0.0605 |
| 0.0718 | 6.21 | 75000 | 0.1172 | 0.0608 |
| 0.071 | 6.25 | 75500 | 0.1301 | 0.0645 |
| 0.0745 | 6.29 | 76000 | 0.1396 | 0.0662 |
| 0.0708 | 6.33 | 76500 | 0.1321 | 0.0632 |
| 0.0678 | 6.37 | 77000 | 0.1456 | 0.0662 |
| 0.0729 | 6.42 | 77500 | 0.1398 | 0.0664 |
| 0.0672 | 6.46 | 78000 | 0.1389 | 0.0657 |
| 0.0785 | 6.5 | 78500 | 0.1260 | 0.0635 |
| 0.0677 | 6.54 | 79000 | 0.1337 | 0.0618 |
| 0.0671 | 6.58 | 79500 | 0.1314 | 0.0639 |
| 0.0725 | 6.62 | 80000 | 0.1204 | 0.0602 |
| 0.0686 | 6.66 | 80500 | 0.1270 | 0.0603 |
| 0.0672 | 6.71 | 81000 | 0.1298 | 0.0615 |
| 0.0774 | 6.75 | 81500 | 0.1390 | 0.0649 |
| 0.0694 | 6.79 | 82000 | 0.1239 | 0.0619 |
| 0.0689 | 6.83 | 82500 | 0.1307 | 0.0628 |
| 0.0693 | 6.87 | 83000 | 0.1245 | 0.0607 |
| 0.0669 | 6.91 | 83500 | 0.1276 | 0.0622 |
| 0.0684 | 6.95 | 84000 | 0.1296 | 0.0612 |
| 0.0656 | 7.0 | 84500 | 0.1267 | 0.0617 |
| 0.064 | 7.04 | 85000 | 0.1261 | 0.0611 |
| 0.0637 | 7.08 | 85500 | 0.1260 | 0.0606 |
| 0.0607 | 7.12 | 86000 | 0.1227 | 0.0601 |
| 0.0621 | 7.16 | 86500 | 0.1301 | 0.0615 |
| 0.0669 | 7.2 | 87000 | 0.1313 | 0.0615 |
| 0.0757 | 7.24 | 87500 | 0.1289 | 0.0618 |
| 0.0634 | 7.28 | 88000 | 0.1283 | 0.0623 |
| 0.0679 | 7.33 | 88500 | 0.1212 | 0.0595 |
| 0.0666 | 7.37 | 89000 | 0.1293 | 0.0603 |
| 0.0653 | 7.41 | 89500 | 0.1353 | 0.0636 |
| 0.0663 | 7.45 | 90000 | 0.1335 | 0.0613 |
| 0.0772 | 7.49 | 90500 | 0.1361 | 0.0650 |
| 0.0595 | 7.53 | 91000 | 0.1307 | 0.0627 |
| 0.0625 | 7.57 | 91500 | 0.1239 | 0.0603 |
| 0.0654 | 7.62 | 92000 | 0.1204 | 0.0615 |
| 0.0659 | 7.66 | 92500 | 0.1322 | 0.0622 |
| 0.0595 | 7.7 | 93000 | 0.1267 | 0.0595 |
| 0.0683 | 7.74 | 93500 | 0.1244 | 0.0608 |
| 0.0703 | 7.78 | 94000 | 0.1138 | 0.0595 |
| 0.0658 | 7.82 | 94500 | 0.1151 | 0.0580 |
| 0.0614 | 7.86 | 95000 | 0.1299 | 0.0613 |
| 0.069 | 7.91 | 95500 | 0.1299 | 0.0621 |
| 0.0663 | 7.95 | 96000 | 0.1282 | 0.0596 |
| 0.0665 | 7.99 | 96500 | 0.1290 | 0.0593 |
| 0.0631 | 8.03 | 97000 | 0.1380 | 0.0624 |
| 0.0617 | 8.07 | 97500 | 0.1438 | 0.0622 |
| 0.0673 | 8.11 | 98000 | 0.1337 | 0.0639 |
| 0.0643 | 8.15 | 98500 | 0.1330 | 0.0617 |
| 0.0637 | 8.2 | 99000 | 0.1364 | 0.0618 |
| 0.0677 | 8.24 | 99500 | 0.1300 | 0.0591 |
| 0.0589 | 8.28 | 100000 | 0.1327 | 0.0598 |
| 0.0625 | 8.32 | 100500 | 0.1321 | 0.0607 |
| 0.0603 | 8.36 | 101000 | 0.1360 | 0.0633 |
| 0.0582 | 8.4 | 101500 | 0.1365 | 0.0621 |
| 0.0748 | 8.44 | 102000 | 0.1417 | 0.0626 |
| 0.0608 | 8.49 | 102500 | 0.1275 | 0.0590 |
| 0.0581 | 8.53 | 103000 | 0.1330 | 0.0602 |
| 0.0589 | 8.57 | 103500 | 0.1400 | 0.0630 |
| 0.0642 | 8.61 | 104000 | 0.1278 | 0.0605 |
| 0.0564 | 8.65 | 104500 | 0.1425 | 0.0613 |
| 0.0638 | 8.69 | 105000 | 0.1312 | 0.0603 |
| 0.0677 | 8.73 | 105500 | 0.1253 | 0.0592 |
| 0.0695 | 8.77 | 106000 | 0.1452 | 0.0636 |
| 0.0581 | 8.82 | 106500 | 0.1379 | 0.0607 |
| 0.0593 | 8.86 | 107000 | 0.1294 | 0.0589 |
| 0.0597 | 8.9 | 107500 | 0.1243 | 0.0590 |
| 0.0559 | 8.94 | 108000 | 0.1343 | 0.0602 |
| 0.0525 | 8.98 | 108500 | 0.1360 | 0.0606 |
| 0.0558 | 9.02 | 109000 | 0.1387 | 0.0591 |
| 0.0491 | 9.06 | 109500 | 0.1443 | 0.0600 |
| 0.06 | 9.11 | 110000 | 0.1365 | 0.0587 |
| 0.0579 | 9.15 | 110500 | 0.1265 | 0.0586 |
| 0.0573 | 9.19 | 111000 | 0.1360 | 0.0594 |
| 0.0569 | 9.23 | 111500 | 0.1317 | 0.0599 |
| 0.0603 | 9.27 | 112000 | 0.1299 | 0.0598 |
| 0.065 | 9.31 | 112500 | 0.1269 | 0.0594 |
| 0.0561 | 9.35 | 113000 | 0.1301 | 0.0586 |
| 0.0542 | 9.4 | 113500 | 0.1333 | 0.0600 |
| 0.0622 | 9.44 | 114000 | 0.1225 | 0.0573 |
| 0.0534 | 9.48 | 114500 | 0.1314 | 0.0599 |
| 0.048 | 9.52 | 115000 | 0.1380 | 0.0589 |
| 0.0555 | 9.56 | 115500 | 0.1302 | 0.0593 |
| 0.0534 | 9.6 | 116000 | 0.1259 | 0.0575 |
| 0.0559 | 9.64 | 116500 | 0.1375 | 0.0581 |
| 0.0557 | 9.69 | 117000 | 0.1248 | 0.0580 |
| 0.0651 | 9.73 | 117500 | 0.1387 | 0.0603 |
| 0.0582 | 9.77 | 118000 | 0.1260 | 0.0577 |
| 0.0512 | 9.81 | 118500 | 0.1343 | 0.0600 |
| 0.061 | 9.85 | 119000 | 0.1338 | 0.0593 |
| 0.0628 | 9.89 | 119500 | 0.1400 | 0.0592 |
| 0.0605 | 9.93 | 120000 | 0.1421 | 0.0602 |
| 0.0541 | 9.98 | 120500 | 0.1256 | 0.0572 |
| 0.0568 | 10.02 | 121000 | 0.1363 | 0.0589 |
| 0.0537 | 10.06 | 121500 | 0.1358 | 0.0587 |
| 0.0505 | 10.1 | 122000 | 0.1300 | 0.0565 |
| 0.0545 | 10.14 | 122500 | 0.1365 | 0.0615 |
| 0.0541 | 10.18 | 123000 | 0.1387 | 0.0597 |
| 0.0472 | 10.22 | 123500 | 0.1293 | 0.0578 |
| 0.0494 | 10.26 | 124000 | 0.1295 | 0.0591 |
| 0.0566 | 10.31 | 124500 | 0.1417 | 0.0604 |
| 0.0497 | 10.35 | 125000 | 0.1469 | 0.0581 |
| 0.0519 | 10.39 | 125500 | 0.1336 | 0.0577 |
| 0.0467 | 10.43 | 126000 | 0.1458 | 0.0612 |
| 0.0547 | 10.47 | 126500 | 0.1424 | 0.0609 |
| 0.0484 | 10.51 | 127000 | 0.1218 | 0.0567 |
| 0.0541 | 10.55 | 127500 | 0.1281 | 0.0581 |
| 0.0518 | 10.6 | 128000 | 0.1246 | 0.0569 |
| 0.0542 | 10.64 | 128500 | 0.1327 | 0.0578 |
| 0.0553 | 10.68 | 129000 | 0.1359 | 0.0580 |
| 0.0567 | 10.72 | 129500 | 0.1279 | 0.0584 |
| 0.047 | 10.76 | 130000 | 0.1390 | 0.0590 |
| 0.0494 | 10.8 | 130500 | 0.1310 | 0.0587 |
| 0.0548 | 10.84 | 131000 | 0.1338 | 0.0565 |
| 0.0596 | 10.89 | 131500 | 0.1292 | 0.0565 |
| 0.0576 | 10.93 | 132000 | 0.1349 | 0.0585 |
| 0.055 | 10.97 | 132500 | 0.1364 | 0.0575 |
| 0.0496 | 11.01 | 133000 | 0.1328 | 0.0571 |
| 0.0537 | 11.05 | 133500 | 0.1387 | 0.0570 |
| 0.0526 | 11.09 | 134000 | 0.1282 | 0.0563 |
| 0.0481 | 11.13 | 134500 | 0.1325 | 0.0570 |
| 0.0553 | 11.18 | 135000 | 0.1350 | 0.0572 |
| 0.0496 | 11.22 | 135500 | 0.1332 | 0.0567 |
| 0.0471 | 11.26 | 136000 | 0.1493 | 0.0592 |
| 0.0518 | 11.3 | 136500 | 0.1276 | 0.0554 |
| 0.0513 | 11.34 | 137000 | 0.1422 | 0.0570 |
| 0.0468 | 11.38 | 137500 | 0.1395 | 0.0568 |
| 0.0538 | 11.42 | 138000 | 0.1327 | 0.0573 |
| 0.0445 | 11.47 | 138500 | 0.1409 | 0.0554 |
| 0.0473 | 11.51 | 139000 | 0.1467 | 0.0585 |
| 0.0556 | 11.55 | 139500 | 0.1551 | 0.0595 |
| 0.0468 | 11.59 | 140000 | 0.1397 | 0.0565 |
| 0.0509 | 11.63 | 140500 | 0.1370 | 0.0585 |
| 0.0481 | 11.67 | 141000 | 0.1334 | 0.0579 |
| 0.0499 | 11.71 | 141500 | 0.1279 | 0.0566 |
| 0.0562 | 11.75 | 142000 | 0.1432 | 0.0583 |
| 0.0488 | 11.8 | 142500 | 0.1448 | 0.0582 |
| 0.0547 | 11.84 | 143000 | 0.1358 | 0.0578 |
| 0.0464 | 11.88 | 143500 | 0.1399 | 0.0580 |
| 0.0507 | 11.92 | 144000 | 0.1419 | 0.0593 |
| 0.0509 | 11.96 | 144500 | 0.1339 | 0.0562 |
| 0.0447 | 12.0 | 145000 | 0.1302 | 0.0554 |
| 0.044 | 12.04 | 145500 | 0.1377 | 0.0560 |
| 0.0435 | 12.09 | 146000 | 0.1389 | 0.0584 |
| 0.0451 | 12.13 | 146500 | 0.1475 | 0.0592 |
| 0.0494 | 12.17 | 147000 | 0.1463 | 0.0571 |
| 0.0723 | 12.21 | 147500 | 0.1305 | 0.0551 |
| 0.0414 | 12.25 | 148000 | 0.1386 | 0.0550 |
| 0.0479 | 12.29 | 148500 | 0.1557 | 0.0565 |
| 0.0489 | 12.33 | 149000 | 0.1293 | 0.0547 |
| 0.0461 | 12.38 | 149500 | 0.1420 | 0.0570 |
| 0.0462 | 12.42 | 150000 | 0.1358 | 0.0566 |
| 0.0431 | 12.46 | 150500 | 0.1529 | 0.0587 |
| 0.0439 | 12.5 | 151000 | 0.1448 | 0.0571 |
| 0.0384 | 12.54 | 151500 | 0.1332 | 0.0553 |
| 0.0498 | 12.58 | 152000 | 0.1324 | 0.0555 |
| 0.0458 | 12.62 | 152500 | 0.1354 | 0.0549 |
| 0.0475 | 12.67 | 153000 | 0.1329 | 0.0555 |
| 0.0487 | 12.71 | 153500 | 0.1324 | 0.0565 |
| 0.0425 | 12.75 | 154000 | 0.1375 | 0.0553 |
| 0.043 | 12.79 | 154500 | 0.1354 | 0.0560 |
| 0.0515 | 12.83 | 155000 | 0.1379 | 0.0560 |
| 0.0494 | 12.87 | 155500 | 0.1455 | 0.0571 |
| 0.0525 | 12.91 | 156000 | 0.1345 | 0.0562 |
| 0.048 | 12.96 | 156500 | 0.1394 | 0.0550 |
| 0.0462 | 13.0 | 157000 | 0.1364 | 0.0564 |
| 0.0495 | 13.04 | 157500 | 0.1510 | 0.0572 |
| 0.0433 | 13.08 | 158000 | 0.1357 | 0.0547 |
| 0.0419 | 13.12 | 158500 | 0.1473 | 0.0554 |
| 0.0453 | 13.16 | 159000 | 0.1443 | 0.0565 |
| 0.043 | 13.2 | 159500 | 0.1622 | 0.0582 |
| 0.0404 | 13.25 | 160000 | 0.1548 | 0.0566 |
| 0.0396 | 13.29 | 160500 | 0.1470 | 0.0564 |
| 0.041 | 13.33 | 161000 | 0.1402 | 0.0557 |
| 0.0468 | 13.37 | 161500 | 0.1445 | 0.0568 |
| 0.0481 | 13.41 | 162000 | 0.1446 | 0.0578 |
| 0.0472 | 13.45 | 162500 | 0.1403 | 0.0553 |
| 0.0437 | 13.49 | 163000 | 0.1494 | 0.0566 |
| 0.0379 | 13.53 | 163500 | 0.1552 | 0.0563 |
| 0.0401 | 13.58 | 164000 | 0.1615 | 0.0610 |
| 0.0504 | 13.62 | 164500 | 0.1536 | 0.0577 |
| 0.0425 | 13.66 | 165000 | 0.1513 | 0.0583 |
| 0.0467 | 13.7 | 165500 | 0.1425 | 0.0575 |
| 0.0459 | 13.74 | 166000 | 0.1359 | 0.0551 |
| 0.0416 | 13.78 | 166500 | 0.1490 | 0.0566 |
| 0.0457 | 13.82 | 167000 | 0.1472 | 0.0560 |
| 0.0484 | 13.87 | 167500 | 0.1358 | 0.0554 |
| 0.0574 | 13.91 | 168000 | 0.1357 | 0.0564 |
| 0.0468 | 13.95 | 168500 | 0.1392 | 0.0569 |
| 0.0462 | 13.99 | 169000 | 0.1231 | 0.0541 |
| 0.0395 | 14.03 | 169500 | 0.1403 | 0.0558 |
| 0.0351 | 14.07 | 170000 | 0.1401 | 0.0536 |
| 0.0439 | 14.11 | 170500 | 0.1354 | 0.0546 |
| 0.0369 | 14.16 | 171000 | 0.1451 | 0.0557 |
| 0.0367 | 14.2 | 171500 | 0.1359 | 0.0555 |
| 0.041 | 14.24 | 172000 | 0.1400 | 0.0559 |
| 0.0414 | 14.28 | 172500 | 0.1494 | 0.0595 |
| 0.0443 | 14.32 | 173000 | 0.1441 | 0.0556 |
| 0.0456 | 14.36 | 173500 | 0.1404 | 0.0566 |
| 0.0441 | 14.4 | 174000 | 0.1362 | 0.0553 |
| 0.0536 | 14.45 | 174500 | 0.1378 | 0.0572 |
| 0.0394 | 14.49 | 175000 | 0.1493 | 0.0580 |
| 0.0401 | 14.53 | 175500 | 0.1477 | 0.0573 |
| 0.0408 | 14.57 | 176000 | 0.1499 | 0.0572 |
| 0.0405 | 14.61 | 176500 | 0.1435 | 0.0548 |
| 0.0476 | 14.65 | 177000 | 0.1425 | 0.0557 |
| 0.0439 | 14.69 | 177500 | 0.1373 | 0.0550 |
| 0.0397 | 14.74 | 178000 | 0.1476 | 0.0586 |
| 0.041 | 14.78 | 178500 | 0.1510 | 0.0580 |
| 0.0399 | 14.82 | 179000 | 0.1469 | 0.0558 |
| 0.0424 | 14.86 | 179500 | 0.1383 | 0.0561 |
| 0.0383 | 14.9 | 180000 | 0.1468 | 0.0561 |
| 0.0494 | 14.94 | 180500 | 0.1365 | 0.0562 |
| 0.0379 | 14.98 | 181000 | 0.1388 | 0.0544 |
| 0.0416 | 15.02 | 181500 | 0.1446 | 0.0554 |
| 0.0391 | 15.07 | 182000 | 0.1368 | 0.0547 |
| 0.0381 | 15.11 | 182500 | 0.1466 | 0.0559 |
| 0.0398 | 15.15 | 183000 | 0.1435 | 0.0576 |
| 0.0359 | 15.19 | 183500 | 0.1523 | 0.0569 |
| 0.0368 | 15.23 | 184000 | 0.1424 | 0.0553 |
| 0.0395 | 15.27 | 184500 | 0.1494 | 0.0553 |
| 0.0474 | 15.31 | 185000 | 0.1459 | 0.0566 |
| 0.045 | 15.36 | 185500 | 0.1432 | 0.0562 |
| 0.0358 | 15.4 | 186000 | 0.1542 | 0.0569 |
| 0.0391 | 15.44 | 186500 | 0.1420 | 0.0544 |
| 0.0401 | 15.48 | 187000 | 0.1398 | 0.0561 |
| 0.0356 | 15.52 | 187500 | 0.1498 | 0.0552 |
| 0.0385 | 15.56 | 188000 | 0.1516 | 0.0578 |
| 0.0423 | 15.6 | 188500 | 0.1406 | 0.0557 |
| 0.0416 | 15.65 | 189000 | 0.1473 | 0.0564 |
| 0.0407 | 15.69 | 189500 | 0.1359 | 0.0570 |
| 0.0459 | 15.73 | 190000 | 0.1483 | 0.0575 |
| 0.0363 | 15.77 | 190500 | 0.1427 | 0.0556 |
| 0.0379 | 15.81 | 191000 | 0.1471 | 0.0557 |
| 0.0438 | 15.85 | 191500 | 0.1412 | 0.0545 |
| 0.0408 | 15.89 | 192000 | 0.1483 | 0.0571 |
| 0.0458 | 15.94 | 192500 | 0.1462 | 0.0604 |
| 0.0398 | 15.98 | 193000 | 0.1560 | 0.0578 |
| 0.0391 | 16.02 | 193500 | 0.1424 | 0.0566 |
| 0.0371 | 16.06 | 194000 | 0.1486 | 0.0564 |
| 0.0359 | 16.1 | 194500 | 0.1517 | 0.0563 |
| 0.0363 | 16.14 | 195000 | 0.1376 | 0.0543 |
| 0.0401 | 16.18 | 195500 | 0.1443 | 0.0565 |
| 0.0423 | 16.23 | 196000 | 0.1452 | 0.0553 |
| 0.041 | 16.27 | 196500 | 0.1451 | 0.0552 |
| 0.0409 | 16.31 | 197000 | 0.1566 | 0.0576 |
| 0.0401 | 16.35 | 197500 | 0.1567 | 0.0574 |
| 0.047 | 16.39 | 198000 | 0.1339 | 0.0548 |
| 0.0399 | 16.43 | 198500 | 0.1440 | 0.0543 |
| 0.041 | 16.47 | 199000 | 0.1504 | 0.0576 |
| 0.0394 | 16.51 | 199500 | 0.1534 | 0.0568 |
| 0.0386 | 16.56 | 200000 | 0.1497 | 0.0583 |
| 0.0313 | 16.6 | 200500 | 0.1624 | 0.0581 |
| 0.0418 | 16.64 | 201000 | 0.1473 | 0.0553 |
| 0.0401 | 16.68 | 201500 | 0.1420 | 0.0554 |
| 0.0429 | 16.72 | 202000 | 0.1534 | 0.0559 |
| 0.0424 | 16.76 | 202500 | 0.1416 | 0.0546 |
| 0.0487 | 16.8 | 203000 | 0.1487 | 0.0568 |
| 0.0434 | 16.85 | 203500 | 0.1524 | 0.0564 |
| 0.0388 | 16.89 | 204000 | 0.1624 | 0.0590 |
| 0.0393 | 16.93 | 204500 | 0.1593 | 0.0558 |
| 0.0457 | 16.97 | 205000 | 0.1516 | 0.0574 |
| 0.0413 | 17.01 | 205500 | 0.1497 | 0.0558 |
| 0.0367 | 17.05 | 206000 | 0.1513 | 0.0549 |
| 0.0402 | 17.09 | 206500 | 0.1535 | 0.0564 |
| 0.0349 | 17.14 | 207000 | 0.1539 | 0.0541 |
| 0.0384 | 17.18 | 207500 | 0.1430 | 0.0534 |
| 0.0399 | 17.22 | 208000 | 0.1515 | 0.0533 |
| 0.0393 | 17.26 | 208500 | 0.1529 | 0.0538 |
| 0.0344 | 17.3 | 209000 | 0.1445 | 0.0535 |
| 0.0394 | 17.34 | 209500 | 0.1472 | 0.0542 |
| 0.0496 | 17.38 | 210000 | 0.1675 | 0.0580 |
| 0.0355 | 17.43 | 210500 | 0.1649 | 0.0551 |
| 0.0322 | 17.47 | 211000 | 0.1658 | 0.0579 |
| 0.0358 | 17.51 | 211500 | 0.1597 | 0.0558 |
| 0.0345 | 17.55 | 212000 | 0.1587 | 0.0547 |
| 0.0387 | 17.59 | 212500 | 0.1570 | 0.0546 |
| 0.0369 | 17.63 | 213000 | 0.1591 | 0.0546 |
| 0.0397 | 17.67 | 213500 | 0.1564 | 0.0548 |
| 0.0369 | 17.72 | 214000 | 0.1515 | 0.0541 |
| 0.0392 | 17.76 | 214500 | 0.1544 | 0.0539 |
| 0.0345 | 17.8 | 215000 | 0.1509 | 0.0542 |
| 0.0397 | 17.84 | 215500 | 0.1377 | 0.0539 |
| 0.0385 | 17.88 | 216000 | 0.1523 | 0.0539 |
| 0.0374 | 17.92 | 216500 | 0.1582 | 0.0548 |
| 0.0415 | 17.96 | 217000 | 0.1591 | 0.0547 |
| 0.0282 | 18.0 | 217500 | 0.1539 | 0.0535 |
| 0.0461 | 18.05 | 218000 | 0.1549 | 0.0552 |
| 0.0312 | 18.09 | 218500 | 0.1632 | 0.0554 |
| 0.0317 | 18.13 | 219000 | 0.1562 | 0.0548 |
| 0.0354 | 18.17 | 219500 | 0.1510 | 0.0538 |
| 0.0396 | 18.21 | 220000 | 0.1666 | 0.0560 |
| 0.0451 | 18.25 | 220500 | 0.1614 | 0.0571 |
| 0.0302 | 18.29 | 221000 | 0.1540 | 0.0540 |
| 0.0372 | 18.34 | 221500 | 0.1503 | 0.0540 |
| 0.0314 | 18.38 | 222000 | 0.1588 | 0.0564 |
| 0.0318 | 18.42 | 222500 | 0.1605 | 0.0556 |
| 0.0421 | 18.46 | 223000 | 0.1663 | 0.0568 |
| 0.0391 | 18.5 | 223500 | 0.1606 | 0.0570 |
| 0.0343 | 18.54 | 224000 | 0.1617 | 0.0560 |
| 0.0402 | 18.58 | 224500 | 0.1451 | 0.0546 |
| 0.035 | 18.63 | 225000 | 0.1486 | 0.0523 |
| 0.0289 | 18.67 | 225500 | 0.1577 | 0.0537 |
| 0.0307 | 18.71 | 226000 | 0.1668 | 0.0558 |
| 0.0332 | 18.75 | 226500 | 0.1640 | 0.0548 |
| 0.034 | 18.79 | 227000 | 0.1552 | 0.0543 |
| 0.0346 | 18.83 | 227500 | 0.1626 | 0.0555 |
| 0.0361 | 18.87 | 228000 | 0.1564 | 0.0561 |
| 0.0343 | 18.92 | 228500 | 0.1472 | 0.0540 |
| 0.0324 | 18.96 | 229000 | 0.1600 | 0.0552 |
| 0.0304 | 19.0 | 229500 | 0.1526 | 0.0533 |
| 0.036 | 19.04 | 230000 | 0.1527 | 0.0540 |
| 0.0303 | 19.08 | 230500 | 0.1736 | 0.0559 |
| 0.0389 | 19.12 | 231000 | 0.1622 | 0.0554 |
| 0.0346 | 19.16 | 231500 | 0.1689 | 0.0543 |
| 0.0328 | 19.21 | 232000 | 0.1665 | 0.0560 |
| 0.0321 | 19.25 | 232500 | 0.1618 | 0.0562 |
| 0.0354 | 19.29 | 233000 | 0.1518 | 0.0537 |
| 0.0352 | 19.33 | 233500 | 0.1497 | 0.0530 |
| 0.0378 | 19.37 | 234000 | 0.1584 | 0.0537 |
| 0.0367 | 19.41 | 234500 | 0.1473 | 0.0532 |
| 0.0351 | 19.45 | 235000 | 0.1574 | 0.0542 |
| 0.0281 | 19.5 | 235500 | 0.1607 | 0.0534 |
| 0.029 | 19.54 | 236000 | 0.1597 | 0.0534 |
| 0.0327 | 19.58 | 236500 | 0.1515 | 0.0534 |
| 0.0355 | 19.62 | 237000 | 0.1611 | 0.0568 |
| 0.0308 | 19.66 | 237500 | 0.1428 | 0.0527 |
| 0.0308 | 19.7 | 238000 | 0.1504 | 0.0525 |
| 0.0338 | 19.74 | 238500 | 0.1609 | 0.0542 |
| 0.0341 | 19.78 | 239000 | 0.1464 | 0.0541 |
| 0.0349 | 19.83 | 239500 | 0.1549 | 0.0532 |
| 0.0352 | 19.87 | 240000 | 0.1591 | 0.0550 |
| 0.0377 | 19.91 | 240500 | 0.1582 | 0.0557 |
| 0.0283 | 19.95 | 241000 | 0.1482 | 0.0526 |
| 0.0344 | 19.99 | 241500 | 0.1426 | 0.0529 |
| 0.0329 | 20.03 | 242000 | 0.1597 | 0.0562 |
| 0.0293 | 20.07 | 242500 | 0.1647 | 0.0546 |
| 0.029 | 20.12 | 243000 | 0.1536 | 0.0515 |
| 0.0336 | 20.16 | 243500 | 0.1497 | 0.0540 |
| 0.0325 | 20.2 | 244000 | 0.1551 | 0.0547 |
| 0.0268 | 20.24 | 244500 | 0.1528 | 0.0551 |
| 0.0298 | 20.28 | 245000 | 0.1585 | 0.0543 |
| 0.0323 | 20.32 | 245500 | 0.1605 | 0.0548 |
| 0.0298 | 20.36 | 246000 | 0.1617 | 0.0552 |
| 0.027 | 20.41 | 246500 | 0.1769 | 0.0571 |
| 0.0286 | 20.45 | 247000 | 0.1620 | 0.0544 |
| 0.0344 | 20.49 | 247500 | 0.1583 | 0.0547 |
| 0.0369 | 20.53 | 248000 | 0.1596 | 0.0537 |
| 0.0357 | 20.57 | 248500 | 0.1662 | 0.0553 |
| 0.031 | 20.61 | 249000 | 0.1619 | 0.0540 |
| 0.042 | 20.65 | 249500 | 0.1494 | 0.0531 |
| 0.0342 | 20.7 | 250000 | 0.1556 | 0.0535 |
| 0.0304 | 20.74 | 250500 | 0.1506 | 0.0531 |
| 0.0339 | 20.78 | 251000 | 0.1524 | 0.0530 |
| 0.0305 | 20.82 | 251500 | 0.1668 | 0.0563 |
| 0.0308 | 20.86 | 252000 | 0.1633 | 0.0549 |
| 0.0322 | 20.9 | 252500 | 0.1633 | 0.0540 |
| 0.0268 | 20.94 | 253000 | 0.1593 | 0.0521 |
| 0.0352 | 20.99 | 253500 | 0.1568 | 0.0533 |
| 0.0247 | 21.03 | 254000 | 0.1721 | 0.0530 |
| 0.0342 | 21.07 | 254500 | 0.1706 | 0.0551 |
| 0.0296 | 21.11 | 255000 | 0.1626 | 0.0527 |
| 0.032 | 21.15 | 255500 | 0.1463 | 0.0518 |
| 0.0349 | 21.19 | 256000 | 0.1480 | 0.0527 |
| 0.034 | 21.23 | 256500 | 0.1469 | 0.0518 |
| 0.0338 | 21.27 | 257000 | 0.1421 | 0.0520 |
| 0.0289 | 21.32 | 257500 | 0.1531 | 0.0536 |
| 0.0253 | 21.36 | 258000 | 0.1587 | 0.0534 |
| 0.0287 | 21.4 | 258500 | 0.1566 | 0.0532 |
| 0.0279 | 21.44 | 259000 | 0.1634 | 0.0536 |
| 0.0318 | 21.48 | 259500 | 0.1576 | 0.0535 |
| 0.028 | 21.52 | 260000 | 0.1623 | 0.0546 |
| 0.0303 | 21.56 | 260500 | 0.1529 | 0.0523 |
| 0.0304 | 21.61 | 261000 | 0.1683 | 0.0553 |
| 0.034 | 21.65 | 261500 | 0.1735 | 0.0550 |
| 0.03 | 21.69 | 262000 | 0.1754 | 0.0573 |
| 0.0308 | 21.73 | 262500 | 0.1614 | 0.0533 |
| 0.0292 | 21.77 | 263000 | 0.1540 | 0.0520 |
| 0.0274 | 21.81 | 263500 | 0.1603 | 0.0523 |
| 0.0318 | 21.85 | 264000 | 0.1560 | 0.0522 |
| 0.0302 | 21.9 | 264500 | 0.1543 | 0.0531 |
| 0.0263 | 21.94 | 265000 | 0.1633 | 0.0530 |
| 0.0292 | 21.98 | 265500 | 0.1508 | 0.0517 |
| 0.0255 | 22.02 | 266000 | 0.1707 | 0.0527 |
| 0.0279 | 22.06 | 266500 | 0.1650 | 0.0528 |
| 0.0307 | 22.1 | 267000 | 0.1576 | 0.0510 |
| 0.0303 | 22.14 | 267500 | 0.1577 | 0.0520 |
| 0.0283 | 22.19 | 268000 | 0.1618 | 0.0524 |
| 0.026 | 22.23 | 268500 | 0.1564 | 0.0522 |
| 0.0284 | 22.27 | 269000 | 0.1595 | 0.0539 |
| 0.0275 | 22.31 | 269500 | 0.1650 | 0.0531 |
| 0.0356 | 22.35 | 270000 | 0.1606 | 0.0544 |
| 0.0309 | 22.39 | 270500 | 0.1617 | 0.0547 |
| 0.0294 | 22.43 | 271000 | 0.1527 | 0.0527 |
| 0.0273 | 22.48 | 271500 | 0.1540 | 0.0522 |
| 0.0225 | 22.52 | 272000 | 0.1518 | 0.0514 |
| 0.0273 | 22.56 | 272500 | 0.1518 | 0.0521 |
| 0.0269 | 22.6 | 273000 | 0.1548 | 0.0516 |
| 0.0228 | 22.64 | 273500 | 0.1546 | 0.0519 |
| 0.0265 | 22.68 | 274000 | 0.1548 | 0.0523 |
| 0.0287 | 22.72 | 274500 | 0.1556 | 0.0514 |
| 0.029 | 22.76 | 275000 | 0.1671 | 0.0525 |
| 0.0301 | 22.81 | 275500 | 0.1548 | 0.0519 |
| 0.0274 | 22.85 | 276000 | 0.1567 | 0.0522 |
| 0.027 | 22.89 | 276500 | 0.1656 | 0.0510 |
| 0.0317 | 22.93 | 277000 | 0.1555 | 0.0519 |
| 0.0314 | 22.97 | 277500 | 0.1549 | 0.0518 |
| 0.0262 | 23.01 | 278000 | 0.1516 | 0.0514 |
| 0.0258 | 23.05 | 278500 | 0.1661 | 0.0533 |
| 0.0252 | 23.1 | 279000 | 0.1630 | 0.0522 |
| 0.0295 | 23.14 | 279500 | 0.1633 | 0.0540 |
| 0.0261 | 23.18 | 280000 | 0.1679 | 0.0538 |
| 0.0254 | 23.22 | 280500 | 0.1615 | 0.0528 |
| 0.024 | 23.26 | 281000 | 0.1546 | 0.0522 |
| 0.0269 | 23.3 | 281500 | 0.1526 | 0.0517 |
| 0.0273 | 23.34 | 282000 | 0.1540 | 0.0509 |
| 0.0246 | 23.39 | 282500 | 0.1646 | 0.0528 |
| 0.0246 | 23.43 | 283000 | 0.1587 | 0.0515 |
| 0.027 | 23.47 | 283500 | 0.1602 | 0.0521 |
| 0.0259 | 23.51 | 284000 | 0.1660 | 0.0532 |
| 0.0223 | 23.55 | 284500 | 0.1678 | 0.0539 |
| 0.0299 | 23.59 | 285000 | 0.1498 | 0.0515 |
| 0.0271 | 23.63 | 285500 | 0.1506 | 0.0506 |
| 0.0295 | 23.68 | 286000 | 0.1596 | 0.0531 |
| 0.024 | 23.72 | 286500 | 0.1570 | 0.0523 |
| 0.025 | 23.76 | 287000 | 0.1546 | 0.0521 |
| 0.0254 | 23.8 | 287500 | 0.1636 | 0.0529 |
| 0.0293 | 23.84 | 288000 | 0.1662 | 0.0528 |
| 0.0243 | 23.88 | 288500 | 0.1677 | 0.0542 |
| 0.0258 | 23.92 | 289000 | 0.1630 | 0.0523 |
| 0.0308 | 23.97 | 289500 | 0.1647 | 0.0541 |
| 0.0258 | 24.01 | 290000 | 0.1738 | 0.0532 |
| 0.0209 | 24.05 | 290500 | 0.1718 | 0.0540 |
| 0.0253 | 24.09 | 291000 | 0.1723 | 0.0543 |
| 0.0275 | 24.13 | 291500 | 0.1687 | 0.0534 |
| 0.0297 | 24.17 | 292000 | 0.1606 | 0.0529 |
| 0.0321 | 24.21 | 292500 | 0.1571 | 0.0539 |
| 0.0258 | 24.25 | 293000 | 0.1593 | 0.0535 |
| 0.0327 | 24.3 | 293500 | 0.1688 | 0.0538 |
| 0.0285 | 24.34 | 294000 | 0.1767 | 0.0536 |
| 0.0221 | 24.38 | 294500 | 0.1742 | 0.0533 |
| 0.0302 | 24.42 | 295000 | 0.1696 | 0.0538 |
| 0.0298 | 24.46 | 295500 | 0.1622 | 0.0518 |
| 0.0233 | 24.5 | 296000 | 0.1676 | 0.0538 |
| 0.0272 | 24.54 | 296500 | 0.1702 | 0.0534 |
| 0.0277 | 24.59 | 297000 | 0.1631 | 0.0533 |
| 0.0225 | 24.63 | 297500 | 0.1601 | 0.0509 |
| 0.025 | 24.67 | 298000 | 0.1597 | 0.0519 |
| 0.0275 | 24.71 | 298500 | 0.1514 | 0.0517 |
| 0.029 | 24.75 | 299000 | 0.1570 | 0.0515 |
| 0.0271 | 24.79 | 299500 | 0.1503 | 0.0509 |
| 0.0218 | 24.83 | 300000 | 0.1633 | 0.0522 |
| 0.027 | 24.88 | 300500 | 0.1586 | 0.0517 |
| 0.0223 | 24.92 | 301000 | 0.1583 | 0.0513 |
| 0.028 | 24.96 | 301500 | 0.1591 | 0.0517 |
| 0.0273 | 25.0 | 302000 | 0.1565 | 0.0501 |
| 0.0324 | 25.04 | 302500 | 0.1598 | 0.0525 |
| 0.0204 | 25.08 | 303000 | 0.1735 | 0.0521 |
| 0.0254 | 25.12 | 303500 | 0.1629 | 0.0520 |
| 0.0284 | 25.17 | 304000 | 0.1652 | 0.0514 |
| 0.0206 | 25.21 | 304500 | 0.1705 | 0.0517 |
| 0.0259 | 25.25 | 305000 | 0.1606 | 0.0510 |
| 0.0229 | 25.29 | 305500 | 0.1653 | 0.0504 |
| 0.0204 | 25.33 | 306000 | 0.1648 | 0.0493 |
| 0.0304 | 25.37 | 306500 | 0.1615 | 0.0491 |
| 0.0267 | 25.41 | 307000 | 0.1554 | 0.0503 |
| 0.0247 | 25.46 | 307500 | 0.1560 | 0.0490 |
| 0.0288 | 25.5 | 308000 | 0.1705 | 0.0518 |
| 0.0289 | 25.54 | 308500 | 0.1560 | 0.0510 |
| 0.0213 | 25.58 | 309000 | 0.1643 | 0.0520 |
| 0.03 | 25.62 | 309500 | 0.1508 | 0.0500 |
| 0.0255 | 25.66 | 310000 | 0.1597 | 0.0504 |
| 0.0302 | 25.7 | 310500 | 0.1577 | 0.0520 |
| 0.0252 | 25.75 | 311000 | 0.1573 | 0.0505 |
| 0.0275 | 25.79 | 311500 | 0.1495 | 0.0498 |
| 0.0297 | 25.83 | 312000 | 0.1524 | 0.0500 |
| 0.0251 | 25.87 | 312500 | 0.1585 | 0.0495 |
| 0.023 | 25.91 | 313000 | 0.1582 | 0.0501 |
| 0.0236 | 25.95 | 313500 | 0.1558 | 0.0487 |
| 0.0287 | 25.99 | 314000 | 0.1619 | 0.0497 |
| 0.0215 | 26.03 | 314500 | 0.1698 | 0.0507 |
| 0.0244 | 26.08 | 315000 | 0.1674 | 0.0490 |
| 0.0243 | 26.12 | 315500 | 0.1559 | 0.0494 |
| 0.0236 | 26.16 | 316000 | 0.1723 | 0.0503 |
| 0.021 | 26.2 | 316500 | 0.1623 | 0.0501 |
| 0.0242 | 26.24 | 317000 | 0.1656 | 0.0503 |
| 0.0243 | 26.28 | 317500 | 0.1583 | 0.0495 |
| 0.0196 | 26.32 | 318000 | 0.1700 | 0.0490 |
| 0.0191 | 26.37 | 318500 | 0.1659 | 0.0496 |
| 0.0242 | 26.41 | 319000 | 0.1606 | 0.0492 |
| 0.0227 | 26.45 | 319500 | 0.1553 | 0.0492 |
| 0.0211 | 26.49 | 320000 | 0.1535 | 0.0492 |
| 0.0206 | 26.53 | 320500 | 0.1610 | 0.0492 |
| 0.0211 | 26.57 | 321000 | 0.1597 | 0.0486 |
| 0.023 | 26.61 | 321500 | 0.1543 | 0.0480 |
| 0.0224 | 26.66 | 322000 | 0.1678 | 0.0495 |
| 0.025 | 26.7 | 322500 | 0.1659 | 0.0499 |
| 0.0235 | 26.74 | 323000 | 0.1627 | 0.0490 |
| 0.0253 | 26.78 | 323500 | 0.1733 | 0.0513 |
| 0.0217 | 26.82 | 324000 | 0.1697 | 0.0506 |
| 0.0209 | 26.86 | 324500 | 0.1684 | 0.0507 |
| 0.0243 | 26.9 | 325000 | 0.1633 | 0.0501 |
| 0.026 | 26.95 | 325500 | 0.1698 | 0.0499 |
| 0.024 | 26.99 | 326000 | 0.1605 | 0.0498 |
| 0.0199 | 27.03 | 326500 | 0.1662 | 0.0498 |
| 0.0231 | 27.07 | 327000 | 0.1645 | 0.0500 |
| 0.0268 | 27.11 | 327500 | 0.1649 | 0.0502 |
| 0.026 | 27.15 | 328000 | 0.1670 | 0.0493 |
| 0.0231 | 27.19 | 328500 | 0.1662 | 0.0502 |
| 0.0227 | 27.24 | 329000 | 0.1704 | 0.0510 |
| 0.0215 | 27.28 | 329500 | 0.1617 | 0.0478 |
| 0.025 | 27.32 | 330000 | 0.1702 | 0.0499 |
| 0.0249 | 27.36 | 330500 | 0.1695 | 0.0503 |
| 0.0235 | 27.4 | 331000 | 0.1620 | 0.0493 |
| 0.03 | 27.44 | 331500 | 0.1772 | 0.0501 |
| 0.027 | 27.48 | 332000 | 0.1728 | 0.0509 |
| 0.0301 | 27.52 | 332500 | 0.1661 | 0.0502 |
| 0.0208 | 27.57 | 333000 | 0.1712 | 0.0509 |
| 0.0226 | 27.61 | 333500 | 0.1704 | 0.0503 |
| 0.0241 | 27.65 | 334000 | 0.1608 | 0.0491 |
| 0.0217 | 27.69 | 334500 | 0.1703 | 0.0493 |
| 0.0254 | 27.73 | 335000 | 0.1713 | 0.0504 |
| 0.0196 | 27.77 | 335500 | 0.1742 | 0.0508 |
| 0.0253 | 27.81 | 336000 | 0.1676 | 0.0503 |
| 0.0279 | 27.86 | 336500 | 0.1737 | 0.0511 |
| 0.0212 | 27.9 | 337000 | 0.1649 | 0.0503 |
| 0.0262 | 27.94 | 337500 | 0.1631 | 0.0491 |
| 0.0257 | 27.98 | 338000 | 0.1654 | 0.0497 |
| 0.0201 | 28.02 | 338500 | 0.1735 | 0.0496 |
| 0.0265 | 28.06 | 339000 | 0.1622 | 0.0485 |
| 0.0244 | 28.1 | 339500 | 0.1754 | 0.0501 |
| 0.0241 | 28.15 | 340000 | 0.1689 | 0.0502 |
| 0.0227 | 28.19 | 340500 | 0.1737 | 0.0489 |
| 0.0218 | 28.23 | 341000 | 0.1685 | 0.0496 |
| 0.0261 | 28.27 | 341500 | 0.1682 | 0.0494 |
| 0.0207 | 28.31 | 342000 | 0.1624 | 0.0485 |
| 0.0273 | 28.35 | 342500 | 0.1701 | 0.0497 |
| 0.0203 | 28.39 | 343000 | 0.1744 | 0.0511 |
| 0.0181 | 28.44 | 343500 | 0.1856 | 0.0505 |
| 0.023 | 28.48 | 344000 | 0.1673 | 0.0491 |
| 0.022 | 28.52 | 344500 | 0.1730 | 0.0505 |
| 0.0209 | 28.56 | 345000 | 0.1752 | 0.0497 |
| 0.0191 | 28.6 | 345500 | 0.1793 | 0.0502 |
| 0.0186 | 28.64 | 346000 | 0.1799 | 0.0499 |
| 0.026 | 28.68 | 346500 | 0.1725 | 0.0496 |
| 0.0233 | 28.73 | 347000 | 0.1717 | 0.0497 |
| 0.024 | 28.77 | 347500 | 0.1718 | 0.0497 |
| 0.0175 | 28.81 | 348000 | 0.1797 | 0.0507 |
| 0.0228 | 28.85 | 348500 | 0.1776 | 0.0504 |
| 0.0197 | 28.89 | 349000 | 0.1754 | 0.0510 |
| 0.0221 | 28.93 | 349500 | 0.1797 | 0.0510 |
| 0.0206 | 28.97 | 350000 | 0.1693 | 0.0492 |
| 0.0205 | 29.01 | 350500 | 0.1735 | 0.0499 |
| 0.0214 | 29.06 | 351000 | 0.1763 | 0.0497 |
| 0.0219 | 29.1 | 351500 | 0.1813 | 0.0502 |
| 0.023 | 29.14 | 352000 | 0.1717 | 0.0500 |
| 0.0233 | 29.18 | 352500 | 0.1690 | 0.0496 |
| 0.0226 | 29.22 | 353000 | 0.1833 | 0.0518 |
| 0.0177 | 29.26 | 353500 | 0.1814 | 0.0516 |
| 0.0218 | 29.3 | 354000 | 0.1745 | 0.0506 |
| 0.0222 | 29.35 | 354500 | 0.1736 | 0.0505 |
| 0.0209 | 29.39 | 355000 | 0.1664 | 0.0496 |
| 0.0165 | 29.43 | 355500 | 0.1673 | 0.0495 |
| 0.0221 | 29.47 | 356000 | 0.1736 | 0.0498 |
| 0.018 | 29.51 | 356500 | 0.1725 | 0.0495 |
| 0.0233 | 29.55 | 357000 | 0.1715 | 0.0505 |
| 0.0201 | 29.59 | 357500 | 0.1723 | 0.0507 |
| 0.0234 | 29.64 | 358000 | 0.1670 | 0.0504 |
| 0.0232 | 29.68 | 358500 | 0.1716 | 0.0502 |
| 0.0193 | 29.72 | 359000 | 0.1744 | 0.0506 |
| 0.0193 | 29.76 | 359500 | 0.1707 | 0.0492 |
| 0.0192 | 29.8 | 360000 | 0.1732 | 0.0491 |
| 0.0197 | 29.84 | 360500 | 0.1739 | 0.0502 |
| 0.0196 | 29.88 | 361000 | 0.1785 | 0.0499 |
| 0.0159 | 29.93 | 361500 | 0.1750 | 0.0486 |
| 0.0201 | 29.97 | 362000 | 0.1697 | 0.0494 |
| 0.024 | 30.01 | 362500 | 0.1769 | 0.0493 |
| 0.0196 | 30.05 | 363000 | 0.1757 | 0.0491 |
| 0.0185 | 30.09 | 363500 | 0.1803 | 0.0494 |
| 0.0194 | 30.13 | 364000 | 0.1770 | 0.0496 |
| 0.0191 | 30.17 | 364500 | 0.1768 | 0.0484 |
| 0.025 | 30.22 | 365000 | 0.1808 | 0.0505 |
| 0.0274 | 30.26 | 365500 | 0.1728 | 0.0485 |
| 0.017 | 30.3 | 366000 | 0.1775 | 0.0497 |
| 0.0255 | 30.34 | 366500 | 0.1764 | 0.0502 |
| 0.0226 | 30.38 | 367000 | 0.1733 | 0.0492 |
| 0.0194 | 30.42 | 367500 | 0.1837 | 0.0506 |
| 0.0198 | 30.46 | 368000 | 0.1803 | 0.0493 |
| 0.0173 | 30.5 | 368500 | 0.1849 | 0.0495 |
| 0.0203 | 30.55 | 369000 | 0.1811 | 0.0489 |
| 0.0205 | 30.59 | 369500 | 0.1722 | 0.0491 |
| 0.0191 | 30.63 | 370000 | 0.1744 | 0.0488 |
| 0.0149 | 30.67 | 370500 | 0.1775 | 0.0483 |
| 0.0216 | 30.71 | 371000 | 0.1757 | 0.0484 |
| 0.0206 | 30.75 | 371500 | 0.1786 | 0.0480 |
| 0.0169 | 30.79 | 372000 | 0.1799 | 0.0489 |
| 0.0237 | 30.84 | 372500 | 0.1774 | 0.0491 |
| 0.0187 | 30.88 | 373000 | 0.1776 | 0.0479 |
| 0.0201 | 30.92 | 373500 | 0.1836 | 0.0505 |
| 0.0181 | 30.96 | 374000 | 0.1773 | 0.0485 |
| 0.0157 | 31.0 | 374500 | 0.1779 | 0.0481 |
| 0.022 | 31.04 | 375000 | 0.1709 | 0.0474 |
| 0.0196 | 31.08 | 375500 | 0.1702 | 0.0481 |
| 0.0167 | 31.13 | 376000 | 0.1842 | 0.0489 |
| 0.018 | 31.17 | 376500 | 0.1849 | 0.0487 |
| 0.0168 | 31.21 | 377000 | 0.1805 | 0.0493 |
| 0.0175 | 31.25 | 377500 | 0.1892 | 0.0498 |
| 0.0188 | 31.29 | 378000 | 0.1807 | 0.0484 |
| 0.0179 | 31.33 | 378500 | 0.1798 | 0.0492 |
| 0.0159 | 31.37 | 379000 | 0.1870 | 0.0491 |
| 0.0205 | 31.42 | 379500 | 0.1824 | 0.0489 |
| 0.019 | 31.46 | 380000 | 0.1823 | 0.0493 |
| 0.0234 | 31.5 | 380500 | 0.1794 | 0.0482 |
| 0.0209 | 31.54 | 381000 | 0.1840 | 0.0491 |
| 0.0179 | 31.58 | 381500 | 0.1791 | 0.0483 |
| 0.017 | 31.62 | 382000 | 0.1858 | 0.0490 |
| 0.0194 | 31.66 | 382500 | 0.1883 | 0.0496 |
| 0.0194 | 31.71 | 383000 | 0.1867 | 0.0491 |
| 0.0202 | 31.75 | 383500 | 0.1834 | 0.0484 |
| 0.0162 | 31.79 | 384000 | 0.1811 | 0.0488 |
| 0.0183 | 31.83 | 384500 | 0.1785 | 0.0478 |
| 0.0172 | 31.87 | 385000 | 0.1798 | 0.0479 |
| 0.0184 | 31.91 | 385500 | 0.1772 | 0.0479 |
| 0.0178 | 31.95 | 386000 | 0.1784 | 0.0484 |
| 0.0145 | 32.0 | 386500 | 0.1925 | 0.0493 |
| 0.0168 | 32.04 | 387000 | 0.1962 | 0.0495 |
| 0.021 | 32.08 | 387500 | 0.1880 | 0.0501 |
| 0.0194 | 32.12 | 388000 | 0.1847 | 0.0490 |
| 0.0184 | 32.16 | 388500 | 0.1839 | 0.0489 |
| 0.0185 | 32.2 | 389000 | 0.1855 | 0.0496 |
| 0.0239 | 32.24 | 389500 | 0.1817 | 0.0494 |
| 0.0196 | 32.28 | 390000 | 0.1851 | 0.0493 |
| 0.0193 | 32.33 | 390500 | 0.1858 | 0.0497 |
| 0.0218 | 32.37 | 391000 | 0.1771 | 0.0487 |
| 0.017 | 32.41 | 391500 | 0.1844 | 0.0487 |
| 0.0195 | 32.45 | 392000 | 0.1789 | 0.0480 |
| 0.0194 | 32.49 | 392500 | 0.1781 | 0.0483 |
| 0.0136 | 32.53 | 393000 | 0.1807 | 0.0488 |
| 0.0191 | 32.57 | 393500 | 0.1805 | 0.0493 |
| 0.0156 | 32.62 | 394000 | 0.1852 | 0.0491 |
| 0.0156 | 32.66 | 394500 | 0.1862 | 0.0492 |
| 0.0182 | 32.7 | 395000 | 0.1900 | 0.0499 |
| 0.0158 | 32.74 | 395500 | 0.1926 | 0.0501 |
| 0.0195 | 32.78 | 396000 | 0.1905 | 0.0495 |
| 0.0196 | 32.82 | 396500 | 0.1840 | 0.0490 |
| 0.0169 | 32.86 | 397000 | 0.1846 | 0.0489 |
| 0.0187 | 32.91 | 397500 | 0.1859 | 0.0505 |
| 0.0204 | 32.95 | 398000 | 0.1896 | 0.0508 |
| 0.0189 | 32.99 | 398500 | 0.1873 | 0.0505 |
| 0.0191 | 33.03 | 399000 | 0.1903 | 0.0502 |
| 0.017 | 33.07 | 399500 | 0.1891 | 0.0497 |
| 0.0171 | 33.11 | 400000 | 0.1898 | 0.0495 |
| 0.0146 | 33.15 | 400500 | 0.1875 | 0.0507 |
| 0.014 | 33.2 | 401000 | 0.1858 | 0.0497 |
| 0.0176 | 33.24 | 401500 | 0.1860 | 0.0499 |
| 0.0212 | 33.28 | 402000 | 0.1867 | 0.0489 |
| 0.0149 | 33.32 | 402500 | 0.1839 | 0.0487 |
| 0.0169 | 33.36 | 403000 | 0.1830 | 0.0487 |
| 0.0189 | 33.4 | 403500 | 0.1844 | 0.0485 |
| 0.0194 | 33.44 | 404000 | 0.1865 | 0.0490 |
| 0.0184 | 33.49 | 404500 | 0.1848 | 0.0495 |
| 0.0185 | 33.53 | 405000 | 0.1838 | 0.0494 |
| 0.0184 | 33.57 | 405500 | 0.1834 | 0.0489 |
| 0.019 | 33.61 | 406000 | 0.1769 | 0.0482 |
| 0.0174 | 33.65 | 406500 | 0.1825 | 0.0482 |
| 0.0215 | 33.69 | 407000 | 0.1819 | 0.0485 |
| 0.0166 | 33.73 | 407500 | 0.1855 | 0.0491 |
| 0.0134 | 33.77 | 408000 | 0.1877 | 0.0482 |
| 0.0212 | 33.82 | 408500 | 0.1878 | 0.0495 |
| 0.0176 | 33.86 | 409000 | 0.1873 | 0.0491 |
| 0.0156 | 33.9 | 409500 | 0.1869 | 0.0483 |
| 0.013 | 33.94 | 410000 | 0.1863 | 0.0490 |
| 0.0182 | 33.98 | 410500 | 0.1900 | 0.0499 |
| 0.0173 | 34.02 | 411000 | 0.1875 | 0.0488 |
| 0.0152 | 34.06 | 411500 | 0.1894 | 0.0487 |
| 0.0158 | 34.11 | 412000 | 0.1868 | 0.0486 |
| 0.0144 | 34.15 | 412500 | 0.1908 | 0.0482 |
| 0.0198 | 34.19 | 413000 | 0.1874 | 0.0488 |
| 0.0146 | 34.23 | 413500 | 0.1941 | 0.0489 |
| 0.0186 | 34.27 | 414000 | 0.1819 | 0.0491 |
| 0.0168 | 34.31 | 414500 | 0.1873 | 0.0495 |
| 0.0152 | 34.35 | 415000 | 0.1933 | 0.0496 |
| 0.016 | 34.4 | 415500 | 0.1890 | 0.0487 |
| 0.0185 | 34.44 | 416000 | 0.1848 | 0.0485 |
| 0.0159 | 34.48 | 416500 | 0.1822 | 0.0478 |
| 0.0166 | 34.52 | 417000 | 0.1858 | 0.0484 |
| 0.0173 | 34.56 | 417500 | 0.1884 | 0.0488 |
| 0.0178 | 34.6 | 418000 | 0.1863 | 0.0478 |
| 0.0156 | 34.64 | 418500 | 0.1906 | 0.0482 |
| 0.0184 | 34.69 | 419000 | 0.1872 | 0.0485 |
| 0.015 | 34.73 | 419500 | 0.1829 | 0.0480 |
| 0.018 | 34.77 | 420000 | 0.1808 | 0.0479 |
| 0.0177 | 34.81 | 420500 | 0.1787 | 0.0481 |
| 0.0163 | 34.85 | 421000 | 0.1842 | 0.0490 |
| 0.0143 | 34.89 | 421500 | 0.1848 | 0.0488 |
| 0.0136 | 34.93 | 422000 | 0.1883 | 0.0489 |
| 0.0183 | 34.98 | 422500 | 0.1876 | 0.0486 |
| 0.017 | 35.02 | 423000 | 0.1900 | 0.0485 |
| 0.016 | 35.06 | 423500 | 0.1882 | 0.0490 |
| 0.0155 | 35.1 | 424000 | 0.1862 | 0.0485 |
| 0.0154 | 35.14 | 424500 | 0.1824 | 0.0483 |
| 0.0223 | 35.18 | 425000 | 0.1845 | 0.0487 |
| 0.016 | 35.22 | 425500 | 0.1870 | 0.0492 |
| 0.0126 | 35.26 | 426000 | 0.1873 | 0.0487 |
| 0.0143 | 35.31 | 426500 | 0.1858 | 0.0481 |
| 0.0147 | 35.35 | 427000 | 0.1861 | 0.0484 |
| 0.015 | 35.39 | 427500 | 0.1878 | 0.0486 |
| 0.0206 | 35.43 | 428000 | 0.1883 | 0.0495 |
| 0.0216 | 35.47 | 428500 | 0.1842 | 0.0479 |
| 0.0146 | 35.51 | 429000 | 0.1900 | 0.0489 |
| 0.0191 | 35.55 | 429500 | 0.1887 | 0.0482 |
| 0.0166 | 35.6 | 430000 | 0.1863 | 0.0480 |
| 0.0145 | 35.64 | 430500 | 0.1877 | 0.0478 |
| 0.0136 | 35.68 | 431000 | 0.1889 | 0.0478 |
| 0.0134 | 35.72 | 431500 | 0.1836 | 0.0477 |
| 0.0125 | 35.76 | 432000 | 0.1899 | 0.0480 |
| 0.0156 | 35.8 | 432500 | 0.1862 | 0.0480 |
| 0.0214 | 35.84 | 433000 | 0.1844 | 0.0481 |
| 0.0142 | 35.89 | 433500 | 0.1824 | 0.0471 |
| 0.0168 | 35.93 | 434000 | 0.1866 | 0.0476 |
| 0.0144 | 35.97 | 434500 | 0.1827 | 0.0475 |
| 0.0128 | 36.01 | 435000 | 0.1869 | 0.0482 |
| 0.0135 | 36.05 | 435500 | 0.1899 | 0.0486 |
| 0.0139 | 36.09 | 436000 | 0.1911 | 0.0484 |
| 0.0128 | 36.13 | 436500 | 0.1876 | 0.0482 |
| 0.0114 | 36.18 | 437000 | 0.1892 | 0.0487 |
| 0.0137 | 36.22 | 437500 | 0.1909 | 0.0483 |
| 0.0161 | 36.26 | 438000 | 0.1911 | 0.0483 |
| 0.0128 | 36.3 | 438500 | 0.1890 | 0.0480 |
| 0.0128 | 36.34 | 439000 | 0.1909 | 0.0479 |
| 0.0157 | 36.38 | 439500 | 0.1884 | 0.0481 |
| 0.0116 | 36.42 | 440000 | 0.1861 | 0.0479 |
| 0.0166 | 36.47 | 440500 | 0.1861 | 0.0480 |
| 0.013 | 36.51 | 441000 | 0.1914 | 0.0484 |
| 0.0154 | 36.55 | 441500 | 0.1932 | 0.0483 |
| 0.0156 | 36.59 | 442000 | 0.1916 | 0.0485 |
| 0.0162 | 36.63 | 442500 | 0.1932 | 0.0485 |
| 0.0137 | 36.67 | 443000 | 0.1915 | 0.0479 |
| 0.0177 | 36.71 | 443500 | 0.1901 | 0.0477 |
| 0.0161 | 36.75 | 444000 | 0.1894 | 0.0479 |
| 0.0151 | 36.8 | 444500 | 0.1907 | 0.0478 |
| 0.0135 | 36.84 | 445000 | 0.1912 | 0.0478 |
| 0.013 | 36.88 | 445500 | 0.1882 | 0.0478 |
| 0.0151 | 36.92 | 446000 | 0.1904 | 0.0476 |
| 0.0143 | 36.96 | 446500 | 0.1905 | 0.0475 |
| 0.0125 | 37.0 | 447000 | 0.1923 | 0.0481 |
| 0.0137 | 37.04 | 447500 | 0.1908 | 0.0477 |
| 0.0144 | 37.09 | 448000 | 0.1868 | 0.0478 |
| 0.0167 | 37.13 | 448500 | 0.1868 | 0.0478 |
| 0.0158 | 37.17 | 449000 | 0.1881 | 0.0477 |
| 0.0168 | 37.21 | 449500 | 0.1882 | 0.0479 |
| 0.0128 | 37.25 | 450000 | 0.1886 | 0.0478 |
| 0.0145 | 37.29 | 450500 | 0.1862 | 0.0477 |
| 0.016 | 37.33 | 451000 | 0.1883 | 0.0476 |
| 0.0132 | 37.38 | 451500 | 0.1872 | 0.0478 |
| 0.0165 | 37.42 | 452000 | 0.1874 | 0.0478 |
| 0.014 | 37.46 | 452500 | 0.1884 | 0.0480 |
| 0.0146 | 37.5 | 453000 | 0.1888 | 0.0478 |
| 0.0142 | 37.54 | 453500 | 0.1883 | 0.0480 |
| 0.0153 | 37.58 | 454000 | 0.1877 | 0.0476 |
| 0.0171 | 37.62 | 454500 | 0.1903 | 0.0480 |
| 0.013 | 37.67 | 455000 | 0.1934 | 0.0478 |
| 0.0135 | 37.71 | 455500 | 0.1896 | 0.0477 |
| 0.0151 | 37.75 | 456000 | 0.1911 | 0.0477 |
| 0.0159 | 37.79 | 456500 | 0.1903 | 0.0474 |
| 0.0151 | 37.83 | 457000 | 0.1927 | 0.0477 |
| 0.0128 | 37.87 | 457500 | 0.1940 | 0.0475 |
| 0.0154 | 37.91 | 458000 | 0.1929 | 0.0479 |
| 0.0119 | 37.96 | 458500 | 0.1913 | 0.0474 |
| 0.0141 | 38.0 | 459000 | 0.1881 | 0.0473 |
| 0.0135 | 38.04 | 459500 | 0.1907 | 0.0472 |
| 0.014 | 38.08 | 460000 | 0.1913 | 0.0476 |
| 0.0146 | 38.12 | 460500 | 0.1913 | 0.0476 |
| 0.0187 | 38.16 | 461000 | 0.1916 | 0.0474 |
| 0.0142 | 38.2 | 461500 | 0.1935 | 0.0475 |
| 0.0144 | 38.25 | 462000 | 0.1914 | 0.0473 |
| 0.0138 | 38.29 | 462500 | 0.1928 | 0.0475 |
| 0.0131 | 38.33 | 463000 | 0.1920 | 0.0473 |
| 0.013 | 38.37 | 463500 | 0.1904 | 0.0469 |
| 0.0139 | 38.41 | 464000 | 0.1918 | 0.0474 |
| 0.0135 | 38.45 | 464500 | 0.1923 | 0.0472 |
| 0.0149 | 38.49 | 465000 | 0.1920 | 0.0471 |
| 0.0133 | 38.53 | 465500 | 0.1905 | 0.0471 |
| 0.0147 | 38.58 | 466000 | 0.1911 | 0.0472 |
| 0.0161 | 38.62 | 466500 | 0.1919 | 0.0474 |
| 0.0174 | 38.66 | 467000 | 0.1918 | 0.0472 |
| 0.0123 | 38.7 | 467500 | 0.1916 | 0.0473 |
| 0.0143 | 38.74 | 468000 | 0.1915 | 0.0470 |
| 0.0112 | 38.78 | 468500 | 0.1903 | 0.0469 |
| 0.0126 | 38.82 | 469000 | 0.1923 | 0.0470 |
| 0.0138 | 38.87 | 469500 | 0.1929 | 0.0471 |
| 0.014 | 38.91 | 470000 | 0.1929 | 0.0472 |
| 0.0152 | 38.95 | 470500 | 0.1939 | 0.0471 |
| 0.0124 | 38.99 | 471000 | 0.1943 | 0.0471 |
| 0.0103 | 39.03 | 471500 | 0.1935 | 0.0470 |
| 0.0143 | 39.07 | 472000 | 0.1940 | 0.0470 |
| 0.0174 | 39.11 | 472500 | 0.1923 | 0.0471 |
| 0.0152 | 39.16 | 473000 | 0.1918 | 0.0472 |
| 0.0153 | 39.2 | 473500 | 0.1909 | 0.0470 |
| 0.0161 | 39.24 | 474000 | 0.1913 | 0.0470 |
| 0.0133 | 39.28 | 474500 | 0.1913 | 0.0470 |
| 0.0126 | 39.32 | 475000 | 0.1912 | 0.0468 |
| 0.0162 | 39.36 | 475500 | 0.1914 | 0.0467 |
| 0.0134 | 39.4 | 476000 | 0.1906 | 0.0467 |
| 0.013 | 39.45 | 476500 | 0.1905 | 0.0468 |
| 0.016 | 39.49 | 477000 | 0.1911 | 0.0468 |
| 0.0149 | 39.53 | 477500 | 0.1912 | 0.0467 |
| 0.0132 | 39.57 | 478000 | 0.1917 | 0.0467 |
| 0.0289 | 39.61 | 478500 | 0.1916 | 0.0467 |
| 0.0129 | 39.65 | 479000 | 0.1916 | 0.0468 |
| 0.0122 | 39.69 | 479500 | 0.1916 | 0.0467 |
| 0.0159 | 39.74 | 480000 | 0.1911 | 0.0467 |
| 0.0126 | 39.78 | 480500 | 0.1915 | 0.0468 |
| 0.0164 | 39.82 | 481000 | 0.1915 | 0.0468 |
| 0.0139 | 39.86 | 481500 | 0.1916 | 0.0469 |
| 0.0122 | 39.9 | 482000 | 0.1919 | 0.0469 |
| 0.0154 | 39.94 | 482500 | 0.1921 | 0.0469 |
| 0.0124 | 39.98 | 483000 | 0.1921 | 0.0469 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| a36b6ebf1c49c06b2d98e88029dee2aa |
DeividasM/gpt2_lithuanian_small | DeividasM | gpt2 | 10 | 54 | transformers | 1 | text-generation | false | true | false | apache-2.0 | ['lt'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-generation'] | false | true | true | 2,332 | false | ## Model description

Lithuanian GPT-2 model trained on a Wikipedia corpus, based on the GPT-2 small architecture.

This is only the first version of the model; over time, it will be improved using a more extensive dataset and better data preparation.
## Training data
This model was pre-trained with 180MB of Lithuanian Wikipedia. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE).
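As a quick illustration of the byte-level BPE described above (not part of the original card; the example sentence is arbitrary and the exact token splits are an assumption about typical GPT-2-style output):

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DeividasM/gpt2_lithuanian_small")

# Byte-level BPE splits rare words into sub-word units; a "G-with-dot" prefix
# in the output marks a token that begins with a space.
print(tokenizer.tokenize("Vilnius yra Lietuvos sostinė"))
```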
## Training
The model was trained on the Wikipedia corpus for 40 hours using an NVIDIA Tesla P100 GPU.
### How to use
### Load model
```
from transformers import AutoTokenizer, TFAutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("DeividasM/gpt2_lithuanian_small")
model = TFAutoModelWithLMHead.from_pretrained("DeividasM/gpt2_lithuanian_small")

# Set the maximum input sequence length to 1024 tokens
tokenizer.model_max_length = 1024
```
## Generate text
```
text = "tekstas "
inputs = tokenizer.encode(text, return_tensors="tf")

outputs = model.generate(
    inputs,
    eos_token_id=50256,
    pad_token_id=50256,
    do_sample=True,
    max_length=40,
    top_k=40,
)

print(tokenizer.decode(outputs[0]))
```
## Limitations and bias
The training data used for this model comes from Lithuanian Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their model card:

"Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don't support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes."
## Author
Lithuanian GPT-2 small was trained and evaluated by Deividas Mataciunas (https://www.linkedin.com/in/deividasmataciunas/)
| 8f633b7ef1b78396b4f34c783b7fb378 |
hatemestinbejaia/legalbert-adept | hatemestinbejaia | bert | 40 | 50 | transformers | 0 | fill-mask | true | false | false | cc-by-sa-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,787 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalbert-adept
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70.0
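For readers who want to reproduce this setup, a minimal sketch of how the hyperparameters above map onto `transformers` `TrainingArguments` (the `output_dir` value is an assumption):

```
from transformers import TrainingArguments

# Assumed mapping of the hyperparameter list above; warmup_steps corresponds
# to lr_scheduler_warmup_steps, and the Adam settings match the defaults.
training_args = TrainingArguments(
    output_dir="legalbert-adept",  # hypothetical output directory
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=70.0,
)
```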
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.4774 | 1.0 | 907 | 4.6352 |
| 4.5985 | 2.0 | 1814 | 4.2252 |
| 4.2598 | 3.0 | 2721 | 3.9970 |
| 4.0564 | 4.0 | 3628 | 3.8458 |
| 3.852 | 5.0 | 4535 | 3.6996 |
| 3.7954 | 6.0 | 5442 | 3.5729 |
| 3.6572 | 7.0 | 6349 | 3.4669 |
| 3.5174 | 8.0 | 7256 | 3.3176 |
| 3.3779 | 9.0 | 8163 | 3.1742 |
| 3.2451 | 10.0 | 9070 | 3.1204 |
| 3.1785 | 11.0 | 9977 | 3.0070 |
| 3.0627 | 12.0 | 10884 | 2.9171 |
| 2.9859 | 13.0 | 11791 | 2.8068 |
| 2.8921 | 14.0 | 12698 | 2.7104 |
| 2.7894 | 15.0 | 13605 | 2.6986 |
| 2.754 | 16.0 | 14512 | 2.6349 |
| 2.6242 | 17.0 | 15419 | 2.5321 |
| 2.6069 | 18.0 | 16326 | 2.5110 |
| 2.5147 | 19.0 | 17233 | 2.4618 |
| 2.4694 | 20.0 | 18140 | 2.3947 |
| 2.4267 | 21.0 | 19047 | 2.3827 |
| 2.3936 | 22.0 | 19954 | 2.3171 |
| 2.3613 | 23.0 | 20861 | 2.2848 |
| 2.2855 | 24.0 | 21768 | 2.2050 |
| 2.2256 | 25.0 | 22675 | 2.1967 |
| 2.2242 | 26.0 | 23582 | 2.1683 |
| 2.1924 | 27.0 | 24489 | 2.1475 |
| 2.136 | 28.0 | 25396 | 2.1203 |
| 2.0947 | 29.0 | 26303 | 2.0854 |
| 2.1093 | 30.0 | 27210 | 2.0813 |
| 2.0255 | 31.0 | 28117 | 2.0102 |
| 1.9977 | 32.0 | 29024 | 2.0168 |
| 1.9815 | 33.0 | 29931 | 2.0015 |
| 1.9804 | 34.0 | 30838 | 1.9795 |
| 1.9459 | 35.0 | 31745 | 1.9581 |
| 1.9032 | 36.0 | 32652 | 1.9227 |
| 1.8959 | 37.0 | 33559 | 1.9146 |
| 1.9449 | 38.0 | 34466 | 1.8836 |
| 1.8673 | 39.0 | 35373 | 1.9147 |
| 1.8379 | 40.0 | 36280 | 1.9020 |
| 1.8424 | 41.0 | 37187 | 1.8786 |
| 1.8173 | 42.0 | 38094 | 1.8736 |
| 1.8092 | 43.0 | 39001 | 1.8398 |
| 1.7937 | 44.0 | 39908 | 1.8393 |
| 1.7844 | 45.0 | 40815 | 1.7940 |
| 1.7868 | 46.0 | 41722 | 1.8064 |
| 1.7554 | 47.0 | 42629 | 1.7834 |
| 1.7161 | 48.0 | 43536 | 1.7966 |
| 1.7715 | 49.0 | 44443 | 1.8080 |
| 1.7177 | 50.0 | 45350 | 1.7561 |
| 1.6985 | 51.0 | 46257 | 1.7451 |
| 1.7119 | 52.0 | 47164 | 1.7476 |
| 1.6712 | 53.0 | 48071 | 1.7359 |
| 1.6765 | 54.0 | 48978 | 1.7663 |
| 1.6749 | 55.0 | 49885 | 1.7227 |
| 1.6639 | 56.0 | 50792 | 1.7032 |
| 1.6363 | 57.0 | 51699 | 1.7090 |
| 1.6378 | 58.0 | 52606 | 1.7037 |
| 1.6237 | 59.0 | 53513 | 1.7047 |
| 1.6311 | 60.0 | 54420 | 1.7031 |
| 1.592 | 61.0 | 55327 | 1.7099 |
| 1.6111 | 62.0 | 56234 | 1.6824 |
| 1.6026 | 63.0 | 57141 | 1.6669 |
| 1.6252 | 64.0 | 58048 | 1.6886 |
| 1.6184 | 65.0 | 58955 | 1.6742 |
| 1.6088 | 66.0 | 59862 | 1.7186 |
| 1.6246 | 67.0 | 60769 | 1.6937 |
| 1.5948 | 68.0 | 61676 | 1.6868 |
| 1.5951 | 69.0 | 62583 | 1.7186 |
| 1.5775 | 70.0 | 63490 | 1.6775 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 8f0dd1657974be211dcc5ef121a50d4e |
bosvath/finetuning-sentiment-model-12000-samples | bosvath | distilbert | 10 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,047 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6165
- Accuracy: 1.0
- F1: 1.0
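As a usage illustration (not part of the original card), the checkpoint can be queried with the standard `text-classification` pipeline; the example sentence and the label names noted in the comment are assumptions:

```
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bosvath/finetuning-sentiment-model-12000-samples",
)

# Returns a label (e.g. LABEL_0 / LABEL_1) with a confidence score.
print(classifier("This movie was a complete waste of time."))
```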
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
| 216c0aeee09eabe0a687824c800fdf55 |
xzhang/distilgpt2-finetuned-spam | xzhang | gpt2 | 9 | 4 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,238 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-spam
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 99 | 5.3140 |
| No log | 2.0 | 198 | 5.1952 |
| No log | 3.0 | 297 | 5.1656 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 69d1bd5a2dc6168388c26643b6900ecd |
dchaplinsky/punctuation_uk_bert | dchaplinsky | null | 6 | 0 | generic | 5 | text2text-generation | false | false | false | mit | ['uk'] | ['ubertext2.0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text2text-generation', 'punctuation prediction', 'punctuation'] | false | true | true | 487 | false |
# Ukrainian model to restore punctuation and capitalization
This is a NeMo model that restores punctuation and capitalization in Ukrainian sentences, trained on 10M+ sentences from the UberText 2.0 corpus (not yet released). The underlying transformer is `bert-base-multilingual-cased`.

The model restores the following punctuation marks: [? . ,].
It also restores capitalization of words.
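The card ships no usage snippet; below is a minimal sketch of how NeMo punctuation-and-capitalization models are typically loaded and queried, assuming the repository provides a `.nemo` checkpoint (the file name is hypothetical):

```
from nemo.collections.nlp.models import PunctuationCapitalizationModel

# "punctuation_uk_bert.nemo" is a hypothetical checkpoint name.
model = PunctuationCapitalizationModel.restore_from("punctuation_uk_bert.nemo")

queries = ["дмитро чаплинський започаткував проект lang-uk"]
# Returns the input sentences with punctuation and capitalization restored.
print(model.add_punctuation_capitalization(queries))
```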
Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk](https://lang.org.ua) project, 2022 | dad94237fd7669f3ceb68c7f84206de4 |
muhtasham/bert-small-finetuned-glue-rte | muhtasham | bert | 14 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,045 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-glue-rte
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8715
- Accuracy: 0.6318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 2.62 | 50 | 1.8285 | 0.6318 |
| No log | 5.26 | 100 | 2.0806 | 0.6462 |
| No log | 7.87 | 150 | 2.1598 | 0.6282 |
| No log | 10.51 | 200 | 2.2774 | 0.6318 |
| No log | 13.15 | 250 | 2.3676 | 0.6245 |
| No log | 15.77 | 300 | 2.4581 | 0.6462 |
| No log | 18.41 | 350 | 2.6175 | 0.6354 |
| No log | 21.05 | 400 | 2.6697 | 0.6354 |
| No log | 23.67 | 450 | 2.7717 | 0.6354 |
| 0.0101 | 26.31 | 500 | 2.7975 | 0.6462 |
| 0.0101 | 28.92 | 550 | 2.8532 | 0.6390 |
| 0.0101 | 31.56 | 600 | 2.9054 | 0.6209 |
| 0.0101 | 34.21 | 650 | 2.8715 | 0.6318 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 0ad219eb6584d286a07f72f08b3bb1b5 |
jonatasgrosman/exp_w2v2t_id_vp-fr_s222 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'id'] | false | true | true | 469 | false | # exp_w2v2t_id_vp-fr_s222
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
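A minimal transcription sketch with the HuggingSound tool mentioned above (the audio paths are placeholders):

```
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_id_vp-fr_s222")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

# Each result holds the transcription along with timestamps and probabilities.
transcriptions = model.transcribe(audio_paths)
```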
| fd34881168556734ee6c9277d7ac1d6c |
SetFit/deberta-v3-large__sst2__train-16-6 | SetFit | deberta-v2 | 10 | 5 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,888 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6846
- Accuracy: 0.5058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6673 | 1.0 | 7 | 0.7580 | 0.2857 |
| 0.5896 | 2.0 | 14 | 0.7885 | 0.5714 |
| 0.5294 | 3.0 | 21 | 1.0040 | 0.4286 |
| 0.3163 | 4.0 | 28 | 1.1761 | 0.5714 |
| 0.1315 | 5.0 | 35 | 1.4315 | 0.4286 |
| 0.0312 | 6.0 | 42 | 2.6115 | 0.2857 |
| 0.1774 | 7.0 | 49 | 2.1631 | 0.5714 |
| 0.0052 | 8.0 | 56 | 2.3838 | 0.4286 |
| 0.0043 | 9.0 | 63 | 2.6553 | 0.4286 |
| 0.0032 | 10.0 | 70 | 2.2774 | 0.4286 |
| 0.0015 | 11.0 | 77 | 1.9467 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 5741cf8280252f148aeb2778de75d75d |
pinot/wav2vec2-large-xls-r-300m-ja-colab-new | pinot | wav2vec2 | 15 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice_10_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,914 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ja-colab-new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1931
- Wer: 0.2584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 637 | 5.3089 | 0.9670 |
| No log | 2.0 | 1274 | 3.2716 | 0.6123 |
| No log | 3.0 | 1911 | 2.1797 | 0.4708 |
| No log | 4.0 | 2548 | 1.8331 | 0.4113 |
| 6.3938 | 5.0 | 3185 | 1.5111 | 0.3460 |
| 6.3938 | 6.0 | 3822 | 1.3575 | 0.3132 |
| 6.3938 | 7.0 | 4459 | 1.2946 | 0.2957 |
| 6.3938 | 8.0 | 5096 | 1.2346 | 0.2762 |
| 1.023 | 9.0 | 5733 | 1.2053 | 0.2653 |
| 1.023 | 10.0 | 6370 | 1.1931 | 0.2584 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| beafa1cf911f10b43a3ec5228fe534f4 |
Petros89/bert-finetuned-ner | Petros89 | bert | 14 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,511 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9320
- Recall: 0.9487
- F1: 0.9403
- Accuracy: 0.9861
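As a quick usage illustration (not part of the original card), the fine-tuned checkpoint can be queried with the standard `token-classification` pipeline; the example sentence is an assumption:

```
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Petros89/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```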
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0889 | 1.0 | 1756 | 0.0748 | 0.9060 | 0.9263 | 0.9160 | 0.9800 |
| 0.0381 | 2.0 | 3512 | 0.0631 | 0.9296 | 0.9468 | 0.9381 | 0.9855 |
| 0.0205 | 3.0 | 5268 | 0.0611 | 0.9320 | 0.9487 | 0.9403 | 0.9861 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.7.0
- Datasets 2.4.0
- Tokenizers 0.12.1
| 712fb12f7ba6a7807a32794e6bda9c6a |
jonatasgrosman/exp_w2v2t_et_unispeech-sat_s364 | jonatasgrosman | unispeech-sat | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['et'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'et'] | false | true | true | 463 | false | # exp_w2v2t_et_unispeech-sat_s364
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 89d2bd608875d3af6b3ddd2f7d086b0e |
Ekkel-AI-Pvt-ltd/stable-diffusion-custom | Ekkel-AI-Pvt-ltd | null | 19 | 159 | diffusers | 0 | null | false | false | false | creativeml-openrail-m | null | null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,700 | false |
### MODEL_CONFIG_DDIM_TRAIN
```
MODEL_CONFIG_DDIM_TRAIN = {
"_class_name": "StableDiffusionPipeline",
"_diffusers_version": "0.6.0",
"scheduler": [
"diffusers",
"DDIMScheduler"
],
"text_encoder": [
"transformers",
"CLIPTextModel"
],
"tokenizer": [
"transformers",
"CLIPTokenizer"
],
"unet": [
"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}
```
### MODEL_CONFIG_DDIM_SAVE
```
MODEL_CONFIG_DDIM_SAVE = {
"_class_name": "StableDiffusionPipeline",
"_diffusers_version": "0.9.0.dev0",
"scheduler": [
"diffusers",
"DDIMScheduler"
],
"text_encoder": [
"transformers",
"CLIPTextModel"
],
"tokenizer": [
"transformers",
"CLIPTokenizer"
],
"unet": [
"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}
```
### SCHEDULER_CONFIG_DDIM_TRAIN
```
SCHEDULER_CONFIG_DDIM_TRAIN = {
"_class_name": "DDIMScheduler",
"_diffusers_version": "0.6.0",
"beta_end": 0.012,
"beta_schedule": "scaled_linear",
"beta_start": 0.00085,
"clip_sample": false,
"num_train_timesteps": 1000,
"set_alpha_to_one": false,
"skip_prk_steps": true,
"steps_offset": 1,
"trained_betas": null
}
```
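As a hedged illustration of how the TRAIN scheduler values above plug into the `diffusers` API (the prompt is arbitrary; `skip_prk_steps` belongs to PNDM-style schedulers and is therefore omitted):

```
from diffusers import DDIMScheduler, StableDiffusionPipeline

# Scheduler rebuilt from the TRAIN config above.
scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    num_train_timesteps=1000,
    set_alpha_to_one=False,
    steps_offset=1,
)

pipe = StableDiffusionPipeline.from_pretrained(
    "Ekkel-AI-Pvt-ltd/stable-diffusion-custom", scheduler=scheduler
)
image = pipe("a photo of an astronaut riding a horse").images[0]
```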
### SCHEDULER_CONFIG_DDIM_SAVE
```
SCHEDULER_CONFIG_DDIM_SAVE = {
"_class_name": "DDIMScheduler",
"_diffusers_version": "0.9.0.dev0",
"beta_end": 0.012,
"beta_schedule": "scaled_linear",
"beta_start": 0.00085,
"clip_sample": false,
"num_train_timesteps": 1000,
"prediction_type": "epsilon",
"set_alpha_to_one": false,
"skip_prk_steps": true,
"steps_offset": 1,
"trained_betas": null
}
``` | 0cc71eeca50ff25a6ccd16454d16ac80 |