modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
aspis/data2vec-text-finetuned-squad2 | 168c261c73222cb3e1988c9f2c8ff0b7f5b2cd1b | 2022-07-05T20:03:52.000Z | [
"pytorch",
"tensorboard",
"data2vec-text",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | aspis | null | aspis/data2vec-text-finetuned-squad2 | 22 | null | transformers | 8,100 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: data2vec-text-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-finetuned-squad2
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1044
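A minimal usage sketch (not an official snippet; the model ID is taken from this card) with the `pipeline` API:
```python
from transformers import pipeline

# Extractive QA with this checkpoint; it was trained SQuAD 2.0-style, so
# unanswerable questions are part of the task.
qa = pipeline("question-answering", model="aspis/data2vec-text-finetuned-squad2")

print(qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD 2.0 dataset for three epochs.",
))
```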
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0173 | 1.0 | 8239 | 0.9629 |
| 0.7861 | 2.0 | 16478 | 1.0098 |
| 0.6402 | 3.0 | 24717 | 1.1044 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
akhisreelibra/mt5-small-finetuned-amazon-en-es | c000dd436ace73d04608af84d3d53ff8af1f6e2a | 2022-07-05T16:04:31.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | akhisreelibra | null | akhisreelibra/mt5-small-finetuned-amazon-en-es | 22 | null | transformers | 8,101 | |
PrimeQA/tapas-based-tableqa-wikisql-lookup | 6fdf82954f7b8ec19079ec525809be1966c0dd70 | 2022-07-09T18:28:41.000Z | [
"pytorch",
"tapas",
"table-question-answering",
"arxiv:2004.02349",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | PrimeQA | null | PrimeQA/tapas-based-tableqa-wikisql-lookup | 22 | null | transformers | 8,102 | ---
license: apache-2.0
---
# Model description
This is a [tapas-base](https://huggingface.co/google/tapas-base) model, trained on the lookup queries of the [wikisql](https://huggingface.co/datasets/wikisql) dataset. It was trained to take tables and questions as input and extract answers from the table.
# Overview
*Language model*: tapas-base \
*Language*: English\
*Task*: Table Question Answering \
*Data*: WikiSQL
# Intended use and limitations
One can use this model to predict answers for natural language queries given a table. Biases associated with the pre-training of tapas-base and the wikisql dataset may be present.
## Usage
One can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework as in this example [notebook](https://github.com/primeqa/primeqa/blob/tableqa_tapas/notebooks/tableqa/tableqa_inference.ipynb).
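If the checkpoint is also compatible with the `table-question-answering` pipeline in `transformers` (an assumption; the card only documents PrimeQA usage), a minimal sketch looks like this:
```python
from transformers import pipeline

tqa = pipeline("table-question-answering", model="PrimeQA/tapas-based-tableqa-wikisql-lookup")

# TAPAS expects every table cell as a string.
table = {
    "City": ["Paris", "London", "Berlin"],
    "Population": ["2100000", "8900000", "3600000"],
}
print(tqa(table=table, query="What is the population of Berlin?"))
```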
## Citation
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
KevinChoi/dpr-question_encoder-klue-roberta-base | b3ee4d28023018d06a09a9b5106a63f7d46180f0 | 2022-07-06T03:52:47.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KevinChoi | null | KevinChoi/dpr-question_encoder-klue-roberta-base | 22 | null | transformers | 8,103 | Entry not found |
sgugger/test-dynamic-pipeline | d895845216b0915a68d360931b2b635fa6276f1f | 2022-07-06T22:23:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sgugger | null | sgugger/test-dynamic-pipeline | 22 | null | transformers | 8,104 | Entry not found |
Manishkalra/discourse_classification | 0e4021b553e6c213a6c593baa3732199675bfc9a | 2022-07-20T09:48:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Manishkalra | null | Manishkalra/discourse_classification | 22 | null | transformers | 8,105 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: discourse_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# discourse_classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7639
- Accuracy: 0.6649
- F1: 0.6649
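A short inference sketch (not an official snippet) using the `pipeline` API; the label names come from the checkpoint's `id2label` mapping:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Manishkalra/discourse_classification")
print(classifier("In contrast, the second experiment showed no significant effect."))
```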
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7565 | 1.0 | 1839 | 0.7589 | 0.6635 | 0.6635 |
| 0.6693 | 2.0 | 3678 | 0.7639 | 0.6649 | 0.6649 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
NAACL2022/spider-nq-ctx-encoder | 421284fcf5d863bfe4408d2bd55bbdc263892ae3 | 2022-07-09T19:20:32.000Z | [
"pytorch",
"dpr",
"arxiv:2112.07708",
"transformers"
] | null | false | NAACL2022 | null | NAACL2022/spider-nq-ctx-encoder | 22 | 4 | transformers | 8,106 | # Spider-NQ: Context Encoder
This is the context encoder of the model fine-tuned on Natural Questions (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note!** We format the passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but the token
type IDs are all zeros.
An example usage:
```python
from transformers import AutoTokenizer, DPRContextEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-nq-ctx-encoder")
model = DPRContextEncoder.from_pretrained("NAACL2022/spider-nq-ctx-encoder")
title = "Sauron"
context = "Sauron is the title character and main antagonist of J. R. R. Tolkien's \"The Lord of the Rings\"."
input_dict = tokenizer(title, context, return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
|
p2o/neuralmind-bert-base-portuguese-squad | d074a6cd5d1cb50e1e84ac887fb0e7181f518a79 | 2022-07-09T20:01:53.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | p2o | null | p2o/neuralmind-bert-base-portuguese-squad | 22 | null | transformers | 8,107 | Entry not found |
sssingh/distilbert-base-uncased-emotion-finetuned | e5ccaeddda7b7983c122f9085cdc7edd4bea05c7 | 2022-07-16T08:15:11.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | sssingh | null | sssingh/distilbert-base-uncased-emotion-finetuned | 22 | null | transformers | 8,108 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: distilbert-base-uncased-emotion-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9350215566385567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-emotion-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- Acc: 0.935
- F1: 0.9350
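An illustrative sketch with the raw model classes (a `pipeline("text-classification", ...)` call works as well); not an official snippet:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sssingh/distilbert-base-uncased-emotion-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe how happy this makes me!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# The emotion label names are stored in the checkpoint config.
print(model.config.id2label[int(probs.argmax())])
```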
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|
| 0.1734 | 1.0 | 250 | 0.1624 | 0.928 | 0.9279 |
| 0.1187 | 2.0 | 500 | 0.1518 | 0.935 | 0.9350 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Hamzaaa/wav2vec2-base-finetuned-trained-3-languages | 715134f2a40b8a6caf06b54f86b5b0d3f8b9204e | 2022-07-11T16:38:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Hamzaaa | null | Hamzaaa/wav2vec2-base-finetuned-trained-3-languages | 22 | null | transformers | 8,109 | Entry not found |
srini98/distilbert_finetuned-clinc | 80bb3493ef2d30af920e6197400f4f965497bc99 | 2022-07-13T15:05:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | srini98 | null | srini98/distilbert_finetuned-clinc | 22 | null | transformers | 8,110 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert_finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7799
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2788 | 0.7371 |
| 3.7785 | 2.0 | 636 | 1.8739 | 0.8358 |
| 3.7785 | 3.0 | 954 | 1.1618 | 0.8923 |
| 1.6926 | 4.0 | 1272 | 0.8647 | 0.9090 |
| 0.9104 | 5.0 | 1590 | 0.7799 | 0.9161 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.11.6
|
jhonparra18/bert-base-uncased-cv-position-classifier | c446350fd81416e2481c4b7c2a8c9e728ebc7647 | 2022-07-13T18:10:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jhonparra18 | null | jhonparra18/bert-base-uncased-cv-position-classifier | 22 | null | transformers | 8,111 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
model-index:
- name: bert-base-uncased-cv-position-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-cv-position-classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6924
- Accuracy: 0.5781
- F1: 0.5781
- Precision: 0.5781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 2.0336        | 1.14  | 1000 | 1.8856          | 0.5259   | 0.5259 | 0.5259    |
| 1.5348        | 2.28  | 2000 | 1.6924          | 0.5781   | 0.5781 | 0.5781    |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.1+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
NimaBoscarino/efficientformer-l1-1000 | 4d215f01f9ec95e56bc7fb8224634f61e41a5873 | 2022-07-18T20:14:47.000Z | [
"pytorch",
"en",
"dataset:imagenet-1k",
"arxiv:2206.01191",
"timm",
"mobile",
"vison",
"image-classification",
"license:apache-2.0"
] | image-classification | false | NimaBoscarino | null | NimaBoscarino/efficientformer-l1-1000 | 22 | null | timm | 8,112 | ---
language:
- en
license: apache-2.0
library_name: timm
tags:
- mobile
- vison
- image-classification
datasets:
- imagenet-1k
metrics:
- accuracy
---
# EfficientFormer-L1
## Table of Contents
- [EfficientFormer-L1](#efficientformer-l1)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Misuse and Out-of-scope Use](#misuse-and-out-of-scope-use)
- [Limitations and Biases](#limitations-and-biases)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation Results](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Citation Information](#citation-information)
<model_details>
## Model Details
<!-- Give an overview of your model, the relevant research paper, who trained it, etc. -->
EfficientFormer-L1, developed by [Snap Research](https://github.com/snap-research), is one of three EfficientFormer models. The EfficientFormer models were released as part of an effort to prove that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
This checkpoint of EfficientFormer-L1 was trained for 1000 epochs.
- Developed by: Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren
- Language(s): English
- License: This model is licensed under the apache-2.0 license
- Resources for more information:
- [Research Paper](https://arxiv.org/abs/2206.01191)
- [GitHub Repo](https://github.com/snap-research/EfficientFormer/)
</model_details>
<how_to_start>
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# A nice code snippet here that describes how to use the model...
```
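Until the placeholder above is filled in, one hedged option is to pull the raw checkpoint yourself and load it into the EfficientFormer-L1 definition from the GitHub repository linked above (assumptions are flagged in the comments):
```python
# Sketch only: assumes the repo stores a plain PyTorch state dict. Check the
# repository's file listing for the real filename; "pytorch_model.bin" is a guess.
# The EfficientFormer model class itself comes from the snap-research GitHub repo,
# not from transformers.
import torch
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(
    repo_id="NimaBoscarino/efficientformer-l1-1000",
    filename="pytorch_model.bin",  # hypothetical filename
)
state_dict = torch.load(weights_path, map_location="cpu")
print(f"Loaded {len(state_dict)} tensors from {weights_path}")
```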
</how_to_start>
<uses>
## Uses
#### Direct Use
This model can be used for image classification and semantic segmentation. On mobile devices (the model was tested on iPhone 12), the CoreML checkpoints will perform these tasks with low latency.
<Limitations_and_Biases>
## Limitations and Biases
Though most designs in EfficientFormer are general-purpose, e.g., the dimension-consistent design and the 4D block with CONV-BN fusion, the actual speed of EfficientFormer may vary on other platforms. For instance, if GeLU is not well supported while HardSwish is efficiently implemented on specific hardware and compilers, the operator may need to be modified accordingly. The proposed latency-driven slimming is simple and fast. However, better results may be achieved if search cost is not a concern and an enumeration-based brute-force search is performed.
Since the model was trained on Imagenet-1K, the [biases embedded in that dataset](https://huggingface.co/datasets/imagenet-1k#considerations-for-using-the-data) will be reflected in the EfficientFormer models.
</Limitations_and_Biases>
<Training>
## Training
#### Training Data
This model was trained on ImageNet-1K.
See the [data card](https://huggingface.co/datasets/imagenet-1k) for additional information.
#### Training Procedure
* Parameters: 12.3 M
* GMACs: 1.3
* Train. Epochs: 1000
Trained on a cluster with NVIDIA A100 and V100 GPUs.
</Training>
<Eval_Results>
## Evaluation Results
Top-1 Accuracy: 80.2% on ImageNet-1K
</Eval_Results>
<Cite>
## Citation Information
```bibtex
@article{li2022efficientformer,
title={EfficientFormer: Vision Transformers at MobileNet Speed},
author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Eric and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian},
journal={arXiv preprint arXiv:2206.01191},
year={2022}
}
```
</Cite> |
Team-PIXEL/pixel-base-finetuned-conll2003-en | 3dd353a6df09737aff194d2b36dbec67218a4b3b | 2022-07-15T03:12:42.000Z | [
"pytorch",
"pixel",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-conll2003-en | 22 | null | transformers | 8,113 | Entry not found |
aalbertini1990/autotrain-first-test-html-1136241677 | e19b7e4367b2d20f7ca4525c490a74ff7f6d7aa0 | 2022-07-16T21:16:30.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:aalbertini1990/autotrain-data-first-test-html",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | aalbertini1990 | null | aalbertini1990/autotrain-first-test-html-1136241677 | 22 | null | transformers | 8,114 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aalbertini1990/autotrain-data-first-test-html
co2_eq_emissions: 19.49742293318862
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1136241677
- CO2 Emissions (in grams): 19.49742293318862
## Validation Metrics
- Loss: 0.18860992789268494
- Rouge1: 84.2283
- Rouge2: 80.2825
- RougeL: 83.9066
- RougeLsum: 83.9129
- Gen Len: 58.3175
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/aalbertini1990/autotrain-first-test-html-1136241677
``` |
koushikn/segformer-finetuned-Maize-10k-steps-sem | 3bedd986d2e9e3a7d6f4eacf9cacd102e1dbbcf2 | 2022-07-17T12:52:45.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | koushikn | null | koushikn/segformer-finetuned-Maize-10k-steps-sem | 22 | null | transformers | 8,115 | ---
license: apache-2.0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-Maize-10k-steps-sem
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-Maize-10k-steps-sem
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the koushikn/Maize_sem_seg dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0756
- Mean Iou: 0.9172
- Mean Accuracy: 0.9711
- Overall Accuracy: 0.9804
- Accuracy Background: 0.9834
- Accuracy Maize: 0.9588
- Iou Background: 0.9779
- Iou Maize: 0.8566
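Assuming the checkpoint works with the `image-segmentation` pipeline (SegFormer semantic-segmentation models generally do), a minimal inference sketch:
```python
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="koushikn/segformer-finetuned-Maize-10k-steps-sem")

# One entry per predicted class, each with a label and a PIL mask.
for segment in segmenter("path/to/maize_field_image.jpg"):
    print(segment["label"])
```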
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Maize | Iou Background | Iou Maize |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:--------------:|:--------------:|:---------:|
| 0.0529 | 1.0 | 678 | 69.3785 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.3755 | 2.0 | 1356 | 0.9455 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0603 | 3.0 | 2034 | 0.0920 | 0.8356 | 0.8602 | 0.9641 | 0.9976 | 0.7227 | 0.9607 | 0.7106 |
| 0.0341 | 4.0 | 2712 | 24.6203 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0332 | 5.0 | 3390 | 101.5635 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0331 | 6.0 | 4068 | 9.6824 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0302 | 7.0 | 4746 | 260.7923 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0305 | 8.0 | 5424 | 172.8153 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0313 | 9.0 | 6102 | 304.2714 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0301 | 10.0 | 6780 | 547.2355 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.03 | 11.0 | 7458 | 224.2607 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0285 | 12.0 | 8136 | 116.3474 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0284 | 13.0 | 8814 | 96.8429 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.0281 | 14.0 | 9492 | 54.2593 | 0.4391 | 0.5 | 0.8781 | 1.0 | 0.0 | 0.8781 | 0.0 |
| 0.028 | 14.75 | 10000 | 0.0756 | 0.9172 | 0.9711 | 0.9804 | 0.9834 | 0.9588 | 0.9779 | 0.8566 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kabelomalapane/En-Nso_update | 347016caba50c0d083d7ea198605bd9a3d61e348 | 2022-07-19T12:44:05.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/En-Nso_update | 22 | null | transformers | 8,116 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Nso_update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Nso_update
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-nso](https://huggingface.co/Helsinki-NLP/opus-mt-en-nso) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8782
- Bleu: 31.2967
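A quick translation sketch (not an official snippet) with the `pipeline` API:
```python
from transformers import pipeline

translator = pipeline("translation", model="kabelomalapane/En-Nso_update")

# English -> Northern Sotho (Sepedi)
print(translator("The children are playing outside.")[0]["translation_text"])
```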
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 4 | 7.2950 | 0.0088 |
| No log | 2.0 | 8 | 5.9614 | 0.6848 |
| No log | 3.0 | 12 | 5.0695 | 4.9050 |
| No log | 4.0 | 16 | 4.5523 | 9.1757 |
| No log | 5.0 | 20 | 4.2355 | 10.4744 |
| No log | 6.0 | 24 | 4.0106 | 14.6163 |
| No log | 7.0 | 28 | 3.8427 | 15.8379 |
| No log | 8.0 | 32 | 3.7264 | 15.6158 |
| No log | 9.0 | 36 | 3.6338 | 16.3562 |
| No log | 10.0 | 40 | 3.5555 | 21.1011 |
| No log | 11.0 | 44 | 3.4839 | 21.5754 |
| No log | 12.0 | 48 | 3.4180 | 22.7155 |
| No log | 13.0 | 52 | 3.3620 | 23.1592 |
| No log | 14.0 | 56 | 3.3115 | 24.3886 |
| No log | 15.0 | 60 | 3.2676 | 24.1278 |
| No log | 16.0 | 64 | 3.2285 | 24.2245 |
| No log | 17.0 | 68 | 3.1974 | 23.9716 |
| No log | 18.0 | 72 | 3.1695 | 24.2395 |
| No log | 19.0 | 76 | 3.1441 | 23.3442 |
| No log | 20.0 | 80 | 3.1235 | 21.3332 |
| No log | 21.0 | 84 | 3.1029 | 21.8410 |
| No log | 22.0 | 88 | 3.0849 | 22.4065 |
| No log | 23.0 | 92 | 3.0666 | 22.3016 |
| No log | 24.0 | 96 | 3.0534 | 22.9616 |
| No log | 25.0 | 100 | 3.0423 | 23.3971 |
| No log | 26.0 | 104 | 3.0306 | 23.5443 |
| No log | 27.0 | 108 | 3.0183 | 23.3348 |
| No log | 28.0 | 112 | 3.0051 | 23.4077 |
| No log | 29.0 | 116 | 2.9947 | 24.1791 |
| No log | 30.0 | 120 | 2.9855 | 24.1265 |
| No log | 31.0 | 124 | 2.9777 | 23.9860 |
| No log | 32.0 | 128 | 2.9691 | 24.7301 |
| No log | 33.0 | 132 | 2.9597 | 25.1896 |
| No log | 34.0 | 136 | 2.9521 | 24.5893 |
| No log | 35.0 | 140 | 2.9457 | 24.5229 |
| No log | 36.0 | 144 | 2.9409 | 24.6232 |
| No log | 37.0 | 148 | 2.9354 | 24.2830 |
| No log | 38.0 | 152 | 2.9322 | 26.1404 |
| No log | 39.0 | 156 | 2.9306 | 25.9425 |
| No log | 40.0 | 160 | 2.9288 | 30.5432 |
| No log | 41.0 | 164 | 2.9261 | 29.4635 |
| No log | 42.0 | 168 | 2.9215 | 28.4787 |
| No log | 43.0 | 172 | 2.9182 | 28.9082 |
| No log | 44.0 | 176 | 2.9151 | 29.3171 |
| No log | 45.0 | 180 | 2.9132 | 28.3602 |
| No log | 46.0 | 184 | 2.9126 | 28.9583 |
| No log | 47.0 | 188 | 2.9104 | 26.0269 |
| No log | 48.0 | 192 | 2.9086 | 29.6904 |
| No log | 49.0 | 196 | 2.9052 | 29.2881 |
| No log | 50.0 | 200 | 2.9020 | 29.6063 |
| No log | 51.0 | 204 | 2.8994 | 29.5224 |
| No log | 52.0 | 208 | 2.8960 | 29.3913 |
| No log | 53.0 | 212 | 2.8930 | 30.5451 |
| No log | 54.0 | 216 | 2.8889 | 32.1862 |
| No log | 55.0 | 220 | 2.8869 | 31.9423 |
| No log | 56.0 | 224 | 2.8859 | 30.7244 |
| No log | 57.0 | 228 | 2.8846 | 30.8172 |
| No log | 58.0 | 232 | 2.8837 | 30.5376 |
| No log | 59.0 | 236 | 2.8826 | 31.1454 |
| No log | 60.0 | 240 | 2.8813 | 30.9049 |
| No log | 61.0 | 244 | 2.8802 | 30.6363 |
| No log | 62.0 | 248 | 2.8802 | 31.3739 |
| No log | 63.0 | 252 | 2.8799 | 30.9776 |
| No log | 64.0 | 256 | 2.8793 | 29.8283 |
| No log | 65.0 | 260 | 2.8795 | 29.6912 |
| No log | 66.0 | 264 | 2.8804 | 29.7654 |
| No log | 67.0 | 268 | 2.8810 | 29.1586 |
| No log | 68.0 | 272 | 2.8822 | 28.8888 |
| No log | 69.0 | 276 | 2.8819 | 29.7222 |
| No log | 70.0 | 280 | 2.8810 | 29.9932 |
| No log | 71.0 | 284 | 2.8811 | 30.2492 |
| No log | 72.0 | 288 | 2.8802 | 29.9644 |
| No log | 73.0 | 292 | 2.8791 | 30.3378 |
| No log | 74.0 | 296 | 2.8790 | 29.8055 |
| No log | 75.0 | 300 | 2.8794 | 29.0100 |
| No log | 76.0 | 304 | 2.8795 | 30.7968 |
| No log | 77.0 | 308 | 2.8790 | 31.5414 |
| No log | 78.0 | 312 | 2.8783 | 31.5060 |
| No log | 79.0 | 316 | 2.8775 | 31.4376 |
| No log | 80.0 | 320 | 2.8766 | 31.6005 |
| No log | 81.0 | 324 | 2.8767 | 31.3697 |
| No log | 82.0 | 328 | 2.8769 | 31.6108 |
| No log | 83.0 | 332 | 2.8770 | 31.4214 |
| No log | 84.0 | 336 | 2.8772 | 31.6039 |
| No log | 85.0 | 340 | 2.8776 | 32.0254 |
| No log | 86.0 | 344 | 2.8779 | 31.4024 |
| No log | 87.0 | 348 | 2.8783 | 32.0279 |
| No log | 88.0 | 352 | 2.8786 | 31.8914 |
| No log | 89.0 | 356 | 2.8788 | 31.6500 |
| No log | 90.0 | 360 | 2.8791 | 31.7698 |
| No log | 91.0 | 364 | 2.8793 | 31.6137 |
| No log | 92.0 | 368 | 2.8793 | 31.8244 |
| No log | 93.0 | 372 | 2.8790 | 31.5626 |
| No log | 94.0 | 376 | 2.8786 | 31.3743 |
| No log | 95.0 | 380 | 2.8785 | 31.4160 |
| No log | 96.0 | 384 | 2.8784 | 31.6682 |
| No log | 97.0 | 388 | 2.8782 | 31.8335 |
| No log | 98.0 | 392 | 2.8782 | 31.7143 |
| No log | 99.0 | 396 | 2.8782 | 31.7143 |
| No log | 100.0 | 400 | 2.8782 | 31.7143 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
erikanesse/test-trainer-gbb-3 | fd2648e4ae31389a3331bb29a115d1ad71309e31 | 2022-07-19T21:12:49.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | erikanesse | null | erikanesse/test-trainer-gbb-3 | 22 | 1 | transformers | 8,117 | ---
tags:
- generated_from_trainer
model-index:
- name: test-trainer-gbb-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer-gbb-3
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
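A hedged generation sketch; since the card does not say what data the model was trained on, treat the output as purely illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="erikanesse/test-trainer-gbb-3")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```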
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
benoitb/nkbert | 5037b8c2586e15e58345ae21fb7983597c291de1 | 2022-07-21T03:42:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | benoitb | null | benoitb/nkbert | 22 | null | transformers | 8,118 | ---
license: mit
---
## NKBert
A BERT model finetuned from a <a href="https://github.com/SKTBrain/KoBERT">KoBERT</a> base on a dataset of North Korean data.
|
eclat12450/fine-tuned-NSPKCbert-12 | 215c888ee997ce2abccb99ef378de9b57d81186b | 2022-07-27T06:22:04.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
] | null | false | eclat12450 | null | eclat12450/fine-tuned-NSPKCbert-12 | 22 | null | transformers | 8,119 | Entry not found |
jungjongho/wav2vec2-large-xlsr-korean-demo-colab | 14fb4f4e8ef1ee1b849e9897a3eaf7e5300a41c6 | 2022-07-28T22:43:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jungjongho | null | jungjongho/wav2vec2-large-xlsr-korean-demo-colab | 22 | null | transformers | 8,120 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-korean-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4534
- Wer: 0.3272
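A transcription sketch (not an official snippet); XLSR-based models expect 16 kHz audio, and the pipeline decodes audio files with ffmpeg:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jungjongho/wav2vec2-large-xlsr-korean-demo-colab")
print(asr("path/to/korean_speech.wav")["text"])
```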
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 17.4809 | 0.65 | 400 | 4.6145 | 1.0 |
| 4.4863 | 1.29 | 800 | 4.3819 | 1.0 |
| 4.2921 | 1.94 | 1200 | 4.1163 | 0.9970 |
| 2.7971 | 2.59 | 1600 | 1.5376 | 0.8379 |
| 1.5061 | 3.24 | 2000 | 1.0354 | 0.7299 |
| 1.1123 | 3.88 | 2400 | 0.7909 | 0.6418 |
| 0.9037 | 4.53 | 2800 | 0.6345 | 0.5698 |
| 0.779 | 5.18 | 3200 | 0.5909 | 0.5571 |
| 0.6834 | 5.83 | 3600 | 0.5339 | 0.5063 |
| 0.6287 | 6.47 | 4000 | 0.5326 | 0.4954 |
| 0.5518 | 7.12 | 4400 | 0.4930 | 0.4607 |
| 0.5315 | 7.77 | 4800 | 0.4577 | 0.4451 |
| 0.4867 | 8.41 | 5200 | 0.4547 | 0.4382 |
| 0.4543 | 9.06 | 5600 | 0.4581 | 0.4371 |
| 0.4089 | 9.71 | 6000 | 0.4387 | 0.4258 |
| 0.3893 | 10.36 | 6400 | 0.4300 | 0.4100 |
| 0.3751 | 11.0 | 6800 | 0.4265 | 0.4137 |
| 0.3333 | 11.65 | 7200 | 0.4294 | 0.4011 |
| 0.3039 | 12.3 | 7600 | 0.4187 | 0.3912 |
| 0.2974 | 12.94 | 8000 | 0.4079 | 0.3805 |
| 0.2658 | 13.59 | 8400 | 0.4273 | 0.3864 |
| 0.2676 | 14.24 | 8800 | 0.4103 | 0.3734 |
| 0.2466 | 14.89 | 9200 | 0.4122 | 0.3701 |
| 0.2282 | 15.53 | 9600 | 0.4176 | 0.3650 |
| 0.2186 | 16.18 | 10000 | 0.4199 | 0.3632 |
| 0.2132 | 16.83 | 10400 | 0.4159 | 0.3671 |
| 0.1962 | 17.48 | 10800 | 0.4321 | 0.3641 |
| 0.1922 | 18.12 | 11200 | 0.4300 | 0.3535 |
| 0.1827 | 18.77 | 11600 | 0.4244 | 0.3596 |
| 0.1709 | 19.42 | 12000 | 0.4191 | 0.3518 |
| 0.157 | 20.06 | 12400 | 0.4308 | 0.3496 |
| 0.147 | 20.71 | 12800 | 0.4360 | 0.3457 |
| 0.1502 | 21.36 | 13200 | 0.4329 | 0.3431 |
| 0.1448 | 22.01 | 13600 | 0.4334 | 0.3432 |
| 0.1407 | 22.65 | 14000 | 0.4392 | 0.3440 |
| 0.1342 | 23.3 | 14400 | 0.4418 | 0.3399 |
| 0.1325 | 23.95 | 14800 | 0.4360 | 0.3383 |
| 0.1183 | 24.6 | 15200 | 0.4521 | 0.3359 |
| 0.1174 | 25.24 | 15600 | 0.4426 | 0.3322 |
| 0.1137 | 25.89 | 16000 | 0.4438 | 0.3356 |
| 0.1129 | 26.54 | 16400 | 0.4547 | 0.3347 |
| 0.1077 | 27.18 | 16800 | 0.4482 | 0.3300 |
| 0.0999 | 27.83 | 17200 | 0.4491 | 0.3281 |
| 0.0978 | 28.48 | 17600 | 0.4533 | 0.3281 |
| 0.0997 | 29.13 | 18000 | 0.4542 | 0.3283 |
| 0.0908 | 29.77 | 18400 | 0.4534 | 0.3272 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
ParkSaeroyi/distilroberta-base-finetuned-wikitext2 | 60e608ccb626feb9e47ed089be3a387d079749cc | 2022-07-29T08:10:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ParkSaeroyi | null | ParkSaeroyi/distilroberta-base-finetuned-wikitext2 | 22 | null | transformers | 8,121 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3687
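A fill-mask sketch (not an official snippet); RoBERTa-style checkpoints use `<mask>` as the mask token:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="ParkSaeroyi/distilroberta-base-finetuned-wikitext2")

for prediction in unmasker("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```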
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 8.8622 |
| No log | 2.0 | 12 | 8.4576 |
| No log | 3.0 | 18 | 8.4412 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Anon25/DialoGPT-Medium-BaymaxBot | 7505e67fb3bcb7da4c00043f45d3af5fc8e45db7 | 2022-07-29T14:58:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Anon25 | null | Anon25/DialoGPT-Medium-BaymaxBot | 22 | null | transformers | 8,122 | ---
tags:
- conversational
---
# DialoGPT BaymaxBot |
A-bhimany-u08/bert-base-cased-qqp | b5e8848d0676e40a2b8a2f4b0a3a3073e581d3e6 | 2021-05-23T06:58:51.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:qqp",
"transformers"
] | text-classification | false | A-bhimany-u08 | null | A-bhimany-u08/bert-base-cased-qqp | 21 | null | transformers | 8,123 |
---
inference: False
datasets:
- qqp
---
A bert-base-cased model trained on the Quora Question Pairs (QQP) dataset. The task is to predict whether two given sentences (or questions) are `not_duplicate` (label 0) or `duplicate` (label 1). The model achieves 89% evaluation accuracy.
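Since the hosted widget is disabled (`inference: False`), a local sentence-pair sketch (illustrative, not official) looks like this:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "A-bhimany-u08/bert-base-cased-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two questions as a single sentence pair.
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print("duplicate" if pred == 1 else "not_duplicate")
```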
|
Aleksandar/bert-srb-ner-setimes | a06811745221ba5ede99506829f2b28bcc6eac66 | 2021-09-22T12:19:23.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | false | Aleksandar | null | Aleksandar/bert-srb-ner-setimes | 21 | null | transformers | 8,124 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-srb-ner-setimes
results:
- task:
name: Token Classification
type: token-classification
metric:
name: Accuracy
type: accuracy
value: 0.9645112274185379
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1955
- Precision: 0.8229
- Recall: 0.8465
- F1: 0.8345
- Accuracy: 0.9645
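A tagging sketch (not an official snippet) with the `pipeline` API:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Aleksandar/bert-srb-ner-setimes",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Novak Đoković je rođen u Beogradu."))
```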
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 0.2281 | 0.6589 | 0.7001 | 0.6789 | 0.9350 |
| No log | 2.0 | 208 | 0.1833 | 0.7105 | 0.7694 | 0.7388 | 0.9470 |
| No log | 3.0 | 312 | 0.1573 | 0.7461 | 0.7778 | 0.7616 | 0.9525 |
| No log | 4.0 | 416 | 0.1489 | 0.7665 | 0.8091 | 0.7872 | 0.9557 |
| 0.1898 | 5.0 | 520 | 0.1445 | 0.7881 | 0.8327 | 0.8098 | 0.9587 |
| 0.1898 | 6.0 | 624 | 0.1473 | 0.7913 | 0.8316 | 0.8109 | 0.9601 |
| 0.1898 | 7.0 | 728 | 0.1558 | 0.8101 | 0.8347 | 0.8222 | 0.9620 |
| 0.1898 | 8.0 | 832 | 0.1616 | 0.8026 | 0.8302 | 0.8162 | 0.9612 |
| 0.1898 | 9.0 | 936 | 0.1716 | 0.8127 | 0.8409 | 0.8266 | 0.9631 |
| 0.0393 | 10.0 | 1040 | 0.1751 | 0.8140 | 0.8369 | 0.8253 | 0.9628 |
| 0.0393 | 11.0 | 1144 | 0.1775 | 0.8096 | 0.8420 | 0.8255 | 0.9626 |
| 0.0393 | 12.0 | 1248 | 0.1763 | 0.8161 | 0.8386 | 0.8272 | 0.9636 |
| 0.0393 | 13.0 | 1352 | 0.1949 | 0.8259 | 0.8400 | 0.8329 | 0.9634 |
| 0.0393 | 14.0 | 1456 | 0.1842 | 0.8205 | 0.8420 | 0.8311 | 0.9642 |
| 0.0111 | 15.0 | 1560 | 0.1862 | 0.8160 | 0.8493 | 0.8323 | 0.9646 |
| 0.0111 | 16.0 | 1664 | 0.1989 | 0.8176 | 0.8367 | 0.8270 | 0.9627 |
| 0.0111 | 17.0 | 1768 | 0.1945 | 0.8246 | 0.8409 | 0.8327 | 0.9638 |
| 0.0111 | 18.0 | 1872 | 0.1997 | 0.8270 | 0.8426 | 0.8347 | 0.9634 |
| 0.0111 | 19.0 | 1976 | 0.1917 | 0.8258 | 0.8491 | 0.8373 | 0.9651 |
| 0.0051 | 20.0 | 2080 | 0.1955 | 0.8229 | 0.8465 | 0.8345 | 0.9645 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
CenIA/bert-base-spanish-wwm-uncased-finetuned-pos | 1b3e20ce7cd4507a1b9b52f47dc2f901b8f60536 | 2021-12-18T00:34:15.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-pos | 21 | null | transformers | 8,125 | Entry not found |
Contrastive-Tension/RoBerta-Large-CT-STSb | 43813afe01041a34f21aff389f44ae7b5a65feec | 2021-05-20T11:41:18.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | Contrastive-Tension | null | Contrastive-Tension/RoBerta-Large-CT-STSb | 21 | null | transformers | 8,126 | Entry not found |
DanL/scientific-challenges-and-directions | d86bd50d2b94e0b592b752b2b1c1674ddea5f65d | 2022-01-19T12:47:22.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:DanL/scientific-challenges-and-directions-dataset",
"arxiv:2108.13751",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | DanL | null | DanL/scientific-challenges-and-directions | 21 | null | transformers | 8,127 | ---
tags:
- generated_from_trainer
- text-classification
language:
- en
datasets:
- DanL/scientific-challenges-and-directions-dataset
widget:
- text: "severe atypical cases of pneumonia emerged and quickly spread worldwide."
example_title: "challenge"
- text: "we speculate that studying IL-6 will be beneficial."
example_title: "direction"
- text: "in future studies, both PRRs should be tested as the cause for multiple deaths."
example_title: "both"
- text: "IbMADS1-transformed potatoes exhibited tuber morphogenesis in the fibrous roots."
example_title: "neither"
metrics:
- precision
- recall
- f1
model-index:
- name: scientific-challenges-and-directions
results: []
---
# scientific-challenges-and-directions
We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the _challenges_ and _directions_ are defined as follows:
* **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap.
* **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration.
* This model here is described in our paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751) (though we've upgraded the infrastructure since the paper was released - there are slight differences in the results).
* Our dataset can be found [here](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset).
* Please cite our paper if you use our datasets or models in your project. See the [BibTeX](#citation).
* Feel free to [email us](#contact-us).
* Also, check out [our search engine](https://challenges.apps.allenai.org/), as an example application.
## Model description
This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [scientific-challenges-and-directions-dataset](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset), designed for multi-label text classification.
## Training and evaluation data
The scientific-challenges-and-directions model is trained based on a dataset that is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the CORD-19 corpus, labeled for classification of challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test/split of the data see section 3.1 in our [paper](https://arxiv.org/abs/2108.13751)
## Example notebook
We include an example notebook that uses the model for inference in our [repo](https://github.com/Dan-La/scientific-challenges-and-directions). See `Inference_Notebook.ipynb`.
A training notebook is also included.
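For a quick look without the notebook, here is a hedged multi-label sketch (the notebook above is the authoritative reference; the 0.5 threshold is an assumption):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "DanL/scientific-challenges-and-directions"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "we speculate that studying IL-6 will be beneficial."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # multi-label, so sigmoid per label

for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(float(p), 3), "predicted" if p > 0.5 else "")
```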
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning rate: 2e-05
- train batch size: 8
- eval batch size: 4
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr scheduler type: linear
- lr scheduler warmup steps: 500
- num epochs: 30
### Training results
The model achieves the following results on the test set:
- Precision Challenge: 0.768719
- Recall Challenge: 0.780405
- F1 Challenge: 0.774518
- Precision Direction: 0.758112
- Recall Direction: 0.774096
- F1 Direction: 0.766021
- Precision (micro avg. on both labels): 0.764894
- Recall (micro avg. on both labels): 0.778139
- F1 (micro avg. on both labels): 0.771459
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
## Citation
If using our dataset and models, please cite:
```
@misc{lahav2021search,
title={A Search Engine for Discovery of Scientific Challenges and Directions},
author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope},
year={2021},
eprint={2108.13751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
Please don't hesitate to reach out.
**Email:** `[email protected]`,`[email protected]`.
|
EMBEDDIA/sloberta-tweetsentiment | 2cbfdc5fb6cdd8b5400eb33153c68ac3072ab726 | 2021-07-09T14:27:28.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | EMBEDDIA | null | EMBEDDIA/sloberta-tweetsentiment | 21 | null | transformers | 8,128 | Entry not found |
EasthShin/Klue-CommonSense-model | 4f01be2e2b74f65ba541d9a75839008e6fd98b59 | 2021-07-12T10:01:36.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | EasthShin | null | EasthShin/Klue-CommonSense-model | 21 | null | transformers | 8,129 |
#### Klue-bert base for Common Sense QA
#### Klue-CommonSense-model DEMO: [Ainize DEMO](https://main-klue-common-sense-qa-east-h-shin.endpoint.ainize.ai/)
#### Klue-CommonSense-model API: [Ainize API](https://ainize.ai/EastHShin/Klue-CommonSense_QA?branch=main)
### Overview
**Language model**: klue/bert-base
<br>
**Language**: Korean
<br>
**Downstream-task**: Extractive QA
<br>
**Training data**: Common sense Data from [Mindslab](https://mindslab.ai:8080/kr/company)
<br>
**Eval data**: Common sense Data from [Mindslab](https://mindslab.ai:8080/kr/company)
<br>
**Code**: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/EastHShin/Klue-CommonSense-workspace)
<br>
### Usage
### In Transformers
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EasthShin/Klue-CommonSense-model")
model = AutoModelForQuestionAnswering.from_pretrained("EasthShin/Klue-CommonSense-model")
context = "your context"
question = "your question"
encodings = tokenizer(context, question, max_length=512, truncation=True,
padding="max_length", return_token_type_ids=False)
encodings = {key: torch.tensor([val]) for key, val in encodings.items()}
input_ids = encodings["input_ids"]
attention_mask = encodings["attention_mask"]
pred = model(input_ids, attention_mask=attention_mask)
start_logits, end_logits = pred.start_logits, pred.end_logits
token_start_index, token_end_index = start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)
pred_ids = input_ids[0][token_start_index: token_end_index + 1]
prediction = tokenizer.decode(pred_ids)
``` |
GKLMIP/bert-khmer-base-uncased-tokenized | 8654291edec0db4592eb4b0db0eb34b7eccfc3fb | 2021-07-31T03:07:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/bert-khmer-base-uncased-tokenized | 21 | null | transformers | 8,130 | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` |
GKLMIP/bert-myanmar-small-uncased | ed42175fd89ee3972cf4b4a706d9f463f23baf35 | 2021-10-11T04:59:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/bert-myanmar-small-uncased | 21 | null | transformers | 8,131 | The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
GPL/msmarco-distilbert-margin-mse | 3fbae3e91e291b2472e58a9fff859a5e564f00a1 | 2021-12-15T04:10:19.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"arxiv:2112.07577",
"transformers"
] | feature-extraction | false | GPL | null | GPL/msmarco-distilbert-margin-mse | 21 | 1 | transformers | 8,132 | This is the zero-shot baseline model in the paper ["GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval"](https://arxiv.org/abs/2112.07577)
The training setup:
1. Start from `distilbert-base-uncased`;
2. Mine 50 hard negatives for each query on MS MARCO with `sentence-transformers/msmarco-distilbert-base-v3` and `sentence-transformers/msmarco-MiniLM-L-6-v3`;
3. Do Margin-MSE training on the tuples (including queries, gold relevant, and hard negatives) with the teacher model `cross-encoder/ms-marco-MiniLM-L-6-v2` for 70K steps with batch size 75, max. sequence-length 350.
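A hedged encoding sketch for using the checkpoint as a dense retriever; mean pooling and dot-product scoring are assumptions here, so check the repository for the exact pooling used in training:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "GPL/msmarco-distilbert-margin-mse"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=350, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling over real tokens

query = embed(["what causes rainbows"])
passage = embed(["Rainbows are caused by the refraction of sunlight in water droplets."])
print(query @ passage.T)  # dot-product relevance score
```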
|
Hate-speech-CNERG/dehatebert-mono-indonesian | 08693d6cc64f7e7b3019b2a3abe3b1a9c8ca74c2 | 2021-05-18T20:33:24.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2004.06465",
"transformers"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/dehatebert-mono-indonesian | 21 | null | transformers | 8,133 | This model is used detecting **hatespeech** in **Indonesian language**. The mono in the name refers to the monolingual setting, where the model is trained using only Arabic language data. It is finetuned on multilingual bert model.
The model is trained with different learning rates and the best validation score achieved is 0.844494 for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Helsinki-NLP/opus-mt-ceb-fr | 90d773c1774988007f9fd8f44477de8d5ee310b6 | 2021-09-09T21:28:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ceb",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ceb-fr | 21 | null | transformers | 8,134 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ceb-fr
* source languages: ceb
* target languages: fr
* OPUS readme: [ceb-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.fr | 30.0 | 0.491 |
|
Helsinki-NLP/opus-mt-en-ha | 36027da91d68364e34454ce37ce60d0a43671430 | 2021-09-09T21:35:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ha",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ha | 21 | null | transformers | 8,135 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ha
* source languages: en
* target languages: ha
* OPUS readme: [en-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ha | 34.1 | 0.544 |
| Tatoeba.en.ha | 17.6 | 0.498 |
|
Helsinki-NLP/opus-mt-en-ig | 32e340a06fdff2e071d306a127d91b5fbb31c359 | 2021-09-09T21:36:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ig",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ig | 21 | null | transformers | 8,136 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ig
* source languages: en
* target languages: ig
* OPUS readme: [en-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ig | 39.5 | 0.546 |
| Tatoeba.en.ig | 3.8 | 0.297 |
|
Helsinki-NLP/opus-mt-es-swc | a75200fce67b931b7ec153baa31b9f56755429f5 | 2021-09-09T21:44:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"swc",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-swc | 21 | null | transformers | 8,137 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-swc
* source languages: es
* target languages: swc
* OPUS readme: [es-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-swc/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-swc/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-swc/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.swc | 26.0 | 0.490 |
|
Helsinki-NLP/opus-mt-gil-en | c9d7eff5c31aff094d44707990b24e11358b7dfd | 2021-09-09T21:59:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gil",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gil-en | 21 | null | transformers | 8,138 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gil-en
* source languages: gil
* target languages: en
* OPUS readme: [gil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.en | 36.0 | 0.522 |
|
Helsinki-NLP/opus-mt-lus-en | f69813f841d2399ba35b514a9377a64aff188fc6 | 2021-09-10T13:56:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lus",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lus-en | 21 | null | transformers | 8,139 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lus-en
* source languages: lus
* target languages: en
* OPUS readme: [lus-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.en | 37.0 | 0.534 |
|
Helsinki-NLP/opus-mt-nyk-en | 635c9eeb90b4d5fb0674da39f756b46981bbc195 | 2021-09-10T13:59:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nyk",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nyk-en | 21 | null | transformers | 8,140 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nyk-en
* source languages: nyk
* target languages: en
* OPUS readme: [nyk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nyk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nyk.en | 27.3 | 0.423 |
|
Helsinki-NLP/opus-mt-pon-en | f2e18a245014af64478edabce9c590c6ef049919 | 2021-09-10T14:01:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pon",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pon-en | 21 | null | transformers | 8,141 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pon-en
* source languages: pon
* target languages: en
* OPUS readme: [pon-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.en | 34.1 | 0.489 |
|
Helsinki-NLP/opus-mt-sm-en | 75d732a3c9dcc01e3218d965fe0eda4a972775d3 | 2021-09-10T14:03:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sm",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sm-en | 21 | null | transformers | 8,142 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sm-en
* source languages: sm
* target languages: en
* OPUS readme: [sm-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sm-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sm-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sm.en | 36.1 | 0.520 |
|
Helsinki-NLP/opus-mt-tiv-en | 6ec75c7fab0b64d880ab2370c6b672c4208e271d | 2021-09-11T10:48:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tiv",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tiv-en | 21 | null | transformers | 8,143 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tiv-en
* source languages: tiv
* target languages: en
* OPUS readme: [tiv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.en | 31.5 | 0.473 |
|
Helsinki-NLP/opus-mt-war-es | 6a6f9fb2b0a5db968aa332d1924f6573889f610d | 2021-09-11T10:51:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"war",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-war-es | 21 | null | transformers | 8,144 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-war-es
* source languages: war
* target languages: es
* OPUS readme: [war-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.es | 28.7 | 0.470 |
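## Usage
A minimal usage sketch (not part of the original release notes), loading the checkpoint directly with the Marian classes; the Waray example sentence is illustrative only.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-war-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative Waray input ("Good morning to all of you.")
batch = tokenizer(["Maupay nga aga ha iyo ngatanan."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```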
|
Jitin/romanized-malayalam | 1ce63a6321b546686dfebfce8f70c01adbd5be0c | 2021-05-20T11:58:42.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Jitin | null | Jitin/romanized-malayalam | 21 | null | transformers | 8,145 | Entry not found |
KoichiYasuoka/roberta-large-japanese-luw-upos | 79973f2afb55e1a6b6ca01a745ba716ba74f4cec | 2022-05-24T06:27:45.000Z | [
"pytorch",
"roberta",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-large-japanese-luw-upos | 21 | null | transformers | 8,146 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# roberta-large-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
LanceaKing/spkrec-ecapa-cnceleb | 014d1d63fdbccd155fe30bce8459d33fea81290c | 2022-01-08T09:27:18.000Z | [
"zh",
"dataset:cnceleb",
"arxiv:2106.04624",
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"license:apache-2.0"
] | null | false | LanceaKing | null | LanceaKing/spkrec-ecapa-cnceleb | 21 | 1 | speechbrain | 8,147 | ---
language: "zh"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA
- TDNN
license: "apache-2.0"
datasets:
- cnceleb
metrics:
- EER
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN embeddings on cnceleb
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on cnceleb 1+ cnceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the cnceleb1 test set (Cleaned) is:
| Release | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb", savedir="pretrained_models/spkrec-ecapa-cnceleb")
score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-cnceleb/example1.wav", "speechbrain/spkrec-ecapa-cnceleb/example2.flac")
```
The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/LanceaKing/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/CNCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ |
LilaBoualili/bert-sim-pair | e03568203cd506372323431ac462711969082076 | 2021-05-18T21:26:27.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | LilaBoualili | null | LilaBoualili/bert-sim-pair | 21 | null | transformers | 8,148 | At its core it uses an BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
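Below is a minimal loading sketch, not taken from that repository: the `#` markers around the shared term only illustrate the Sim-Pair idea (the exact marking procedure used in training is defined in the repository above), and the snippet assumes label index 1 corresponds to the relevant class.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LilaBoualili/bert-sim-pair")
model = AutoModelForSequenceClassification.from_pretrained("LilaBoualili/bert-sim-pair")

# Illustrative query-passage pair with exact-match markers around the shared term "rain"
query = "what causes # rain #"
passage = "# Rain # forms when water vapour in clouds condenses into droplets heavy enough to fall."

inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
relevance = torch.softmax(logits, dim=-1)[0, 1].item()  # assumes index 1 = relevant
print(relevance)
```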
|
NoLawz/DialoGPT-medium-hagrid | c8b2bdebdc4cc87859abeb56336afbd909720f63 | 2021-08-27T04:32:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | NoLawz | null | NoLawz/DialoGPT-medium-hagrid | 21 | null | transformers | 8,149 | ---
tags:
- conversational
---
# Hagrid DialoGPT medium model |
PereLluis13/wav2vec2-large-xlsr-53-greek | 1038521bc2c8994cb6778ff514fec91c388243f8 | 2021-07-05T16:44:41.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"el",
"dataset:common_voice",
"dataset:CSS10",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | PereLluis13 | null | PereLluis13/wav2vec2-large-xlsr-53-greek | 21 | null | transformers | 8,150 | ---
language: el #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
datasets:
- common_voice #TODO: remove if you did not use the common voice dataset
- CSS10
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53 - CV + CSS10 #TODO: replace {human_readable_name} with a name of your model as it should appear on the leaderboard. It could be something like `Elgeish XLSR Wav2Vec2 Large 53`
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
type: common_voice
args: el #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
metrics:
- name: Test WER
type: wer
value: 20.89 #TODO (IMPORTANT): replace {wer_result_on_test} with the WER error rate you achieved on the common_voice test set. It should be in the format XX.XX (don't add the % sign here). **Please** remember to fill out this value after you evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, please remember to edit the model card afterward to fill in your value
---
# Wav2Vec2-Large-XLSR-53-greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test")
processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model over the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 20.89 %
## Training
The Common Voice `train` and `validation` splits were used for training, with the CSS10 data added as an `extra` split. The sampling rate and format of the CSS10 files are different, hence the function `speech_file_to_array_fn` was changed to:
```python
import librosa
import soundfile as sf

def speech_file_to_array_fn(batch):
    try:
        speech_array, sampling_rate = sf.read(batch["path"] + ".wav")
    except:
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16000, res_type='zero_order_hold')
        sf.write(batch["path"] + ".wav", speech_array, sampling_rate, subtype='PCM_24')
    batch["speech"] = speech_array
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["text"]
    return batch
```
As suggested by [Florian Zimmermeister](https://github.com/flozi00).
The script used for training can be found in [run_common_voice.py](examples/research_projects/wav2vec2/run_common_voice.py), still pending of PR. The only changes are to `speech_file_to_array_fn`. Batch size was kept at 32 (using `gradient_accumulation_steps`) using one of the [OVH](https://www.ovh.com/) machines, with a V100 GPU (thank you very much [OVH](https://www.ovh.com/)). The model trained for 40 epochs, the first 20 with the `train+validation` splits, and then `extra` split was added with the data from CSS10 at the 20th epoch. |
Pyjay/gpt2-medium-dutch-finetuned-text-generation | 320d8904c16b550e03a873be6709796643c8c5d2 | 2021-07-23T09:44:31.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | false | Pyjay | null | Pyjay/gpt2-medium-dutch-finetuned-text-generation | 21 | null | transformers | 8,151 | ---
tags:
- generated_from_trainer
model_index:
- name: gpt2-medium-dutch-finetuned-text-generation
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-dutch-finetuned-text-generation
This model is a fine-tuned version of [GroNLP/gpt2-medium-dutch-embeddings](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 394 | 4.0144 |
| 3.3633 | 2.0 | 788 | 3.9379 |
| 2.7108 | 3.0 | 1182 | 3.9268 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Pyjay/sentence-transformers-multilingual-snli-v2-500k | db1e3450586788d37d6d0df60a0fd5f72d554aa3 | 2021-08-05T21:42:55.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | Pyjay | null | Pyjay/sentence-transformers-multilingual-snli-v2-500k | 21 | 1 | sentence-transformers | 8,152 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Pyjay/sentence-transformers-multilingual-snli-v2-500k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
model = AutoModel.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Pyjay/sentence-transformers-multilingual-snli-v2-500k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15604 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
SEBIS/code_trans_t5_large_commit_generation_multitask | 90b07932ba1f058f61a452044f07181d179f3dcc | 2021-06-23T08:09:26.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_commit_generation_multitask | 21 | null | transformers | 8,153 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model on git commits, using the t5-large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model can be used to generate git commit messages for commit changes, or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes; however, if the changes are tokenized, the performance should be better.
### How to use
Here is how to use this model to generate git commit message using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/commit%20generation/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
Theivaprakasham/sentence-transformers-paraphrase-MiniLM-L6-v2-twitter_sentiment | 2c704c61b29e85390dd28858371bf95d8af4306e | 2021-12-06T06:18:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Theivaprakasham | null | Theivaprakasham/sentence-transformers-paraphrase-MiniLM-L6-v2-twitter_sentiment | 21 | null | transformers | 8,154 | Entry not found |
TuhinColumbia/germanpoetrymany | 339c1b86ce4524cc7d61743ede33ba9c6bca47ee | 2021-09-04T09:37:02.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | TuhinColumbia | null | TuhinColumbia/germanpoetrymany | 21 | null | transformers | 8,155 | Entry not found |
abdelkader/distilbert-base-uncased-finetuned-clinc | 43cb619030ec12a7c61727fb0f1300c011eb2d4c | 2022-01-20T04:59:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | abdelkader | null | abdelkader/distilbert-base-uncased-finetuned-clinc | 21 | null | transformers | 8,156 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2831 | 0.7426 |
| 3.785 | 2.0 | 636 | 1.8739 | 0.8335 |
| 3.785 | 3.0 | 954 | 1.1525 | 0.8926 |
| 1.6894 | 4.0 | 1272 | 0.8569 | 0.91 |
| 0.897 | 5.0 | 1590 | 0.7713 | 0.9174 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
alexbrandsen/ArcheoBERTje-NER | 7139d3191d64934a64e07b2083c7f00adc80a676 | 2021-05-18T23:21:58.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | alexbrandsen | null | alexbrandsen/ArcheoBERTje-NER | 21 | 1 | transformers | 8,157 | # ArcheoBERTje-NER
A Dutch BERT model for Named Entity Recognition in the Archaeology domain
This is the [ArcheoBERTje](https://huggingface.co/alexbrandsen/ArcheoBERTje) model finetuned for NER, targeting the following entities:
- Time periods
- Places
- Artefacts
- Contexts
- Materials
- Species
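## Usage
A minimal usage sketch (not part of the original card); the Dutch example sentence and the aggregation setting are illustrative only.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alexbrandsen/ArcheoBERTje-NER",
    aggregation_strategy="simple",
)

# Illustrative Dutch sentence: "Early medieval pottery was found at this site."
print(ner("Op deze vindplaats werd vroegmiddeleeuws aardewerk gevonden."))
```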
|
allenai/dsp_roberta_base_tapt_chemprot_4169 | b8b106a3c5d0b7fd876320ddd4f801c205782f1c | 2021-05-20T13:23:16.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
] | null | false | allenai | null | allenai/dsp_roberta_base_tapt_chemprot_4169 | 21 | null | transformers | 8,158 | Entry not found |
aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616_squad2_covid-qna | 629f45e60d677baff78e60affe105a553414c073 | 2021-05-18T23:49:46.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616_squad2_covid-qna | 21 | null | transformers | 8,159 | Entry not found |
aphuongle95/xlnet_effect_partial_new | f2c28acc4a763fb7af0150a2933fbe859e1fdec5 | 2020-09-23T16:40:15.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
] | text-generation | false | aphuongle95 | null | aphuongle95/xlnet_effect_partial_new | 21 | null | transformers | 8,160 | Entry not found |
benjaminbeilharz/dialoGPT-small-empatheticdialogues-generation | 4a8d404f9b35c1d92a511c5424d9a0243dafaeb1 | 2022-01-27T11:07:49.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"en",
"dataset:empathetic dialogues",
"transformers",
"conversational",
"license:mit"
] | conversational | false | benjaminbeilharz | null | benjaminbeilharz/dialoGPT-small-empatheticdialogues-generation | 21 | null | transformers | 8,161 | ---
language:
- en
datasets:
- empathetic dialogues
tags:
- conversational
- pytorch
- transformers
- gpt2
license: mit
---
Still figuring out how to properly write model cards.
WIP. |
bgoel4132/tweet-disaster-classifier | db2a76702f811bfe3c016d1f29c205b842394a33 | 2021-11-02T09:55:27.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:bgoel4132/autonlp-data-tweet-disaster-classifier",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | bgoel4132 | null | bgoel4132/tweet-disaster-classifier | 21 | null | transformers | 8,162 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bgoel4132/autonlp-data-tweet-disaster-classifier
co2_eq_emissions: 27.22397099134103
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 28716412
- CO2 Emissions (in grams): 27.22397099134103
## Validation Metrics
- Loss: 0.4146720767021179
- Accuracy: 0.8066924731182795
- Macro F1: 0.7835463282531184
- Micro F1: 0.8066924731182795
- Weighted F1: 0.7974252447208724
- Macro Precision: 0.8183917344767431
- Micro Precision: 0.8066924731182795
- Weighted Precision: 0.8005510296861892
- Macro Recall: 0.7679676081852519
- Micro Recall: 0.8066924731182795
- Weighted Recall: 0.8066924731182795
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bgoel4132/autonlp-tweet-disaster-classifier-28716412
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
castorini/duot5-3b-med-msmarco | 553eafaab45ee8b980baa4c9ca2df4eb044f8235 | 2021-05-28T12:02:55.000Z | [
"pytorch",
"t5",
"feature-extraction",
"arxiv:2101.05667",
"transformers"
] | feature-extraction | false | castorini | null | castorini/duot5-3b-med-msmarco | 21 | null | transformers | 8,163 | This model is a T5-3B reranker pre-finetuned on the MS MARCO passage dataset for 10K steps (or 1 epoch) on the pairwise task and then finetuned on MedMARCO (from [Sledge-Z paper](https://www.aclweb.org/anthology/2020.emnlp-main.341.pdf)) for 1K steps on the pairwise task.
For more details on how to use it, check [pygaggle.ai](pygaggle.ai)!
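As a rough illustration only (pygaggle remains the supported path), the checkpoint can also be loaded with plain Transformers. The pairwise prompt template below follows the Expando-Mono-Duo paper, and the use of the `t5-3b` tokenizer is an assumption about the vocabulary.
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumption: the checkpoint uses the standard T5 vocabulary, so the t5-3b tokenizer is loaded here
tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("castorini/duot5-3b-med-msmarco")

query = "what are the symptoms of influenza"
doc0 = "Influenza commonly causes fever, cough, sore throat and muscle aches."
doc1 = "The capital of France is Paris."

# Pairwise (duo) prompt; the model is trained to emit "true" when doc0 is more relevant than doc1
text = f"Query: {query} Document0: {doc0} Document1: {doc1} Relevant:"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=1)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```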
Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/abs/2101.05667) |
danurahul/alex_gpt3_Doctextfull2 | 0a212546424a9936eacf37501e9a3b8698534b8c | 2021-05-21T15:19:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/alex_gpt3_Doctextfull2 | 21 | null | transformers | 8,164 | Entry not found |
dbmdz/flair-historic-ner-onb | 99c1e7122a688aae8a1f45f875207a358bb109d0 | 2021-02-26T15:41:21.000Z | [
"pytorch",
"de",
"flair",
"token-classification",
"sequence-tagger-model",
"license:mit"
] | token-classification | false | dbmdz | null | dbmdz/flair-historic-ner-onb | 21 | null | flair | 8,165 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
widget:
- text: "April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber."
license: mit
---
# Towards Robust Named Entity Recognition for Historic German
Based on [our paper](https://www.aclweb.org/anthology/W19-4312/)
we release a new model trained on the ONB dataset.
**Note:** We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3 | Avg.
| ------------- | ----- | ----- | --------- | ------------
| Development | 86.69 | 86.13 | **87.18** | 86.67
| Test | 85.27 | 86.05 | 85.75† | 85.69
The paper reported an averaged F1-score of 85.31.
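# Usage
A minimal usage sketch (not part of the original card), following the standard Flair API; the example sentence is the widget text from the metadata above.
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face model hub
tagger = SequenceTagger.load("dbmdz/flair-historic-ner-onb")

# Historic German example taken from the widget above
sentence = Sentence("April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber.")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```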
† denotes that this model is selected for upload.
|
dbsamu/electra-small-discriminator-finetuned-ner | 22872a0c99f393a67de341f085453242bad81129 | 2022-01-24T14:27:41.000Z | [
"pytorch",
"tensorboard",
"electra",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | dbsamu | null | dbsamu/electra-small-discriminator-finetuned-ner | 21 | null | transformers | 8,166 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electra-small-discriminator-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: en
metrics:
- name: Precision
type: precision
value: 0.7330965535385425
- name: Recall
type: recall
value: 0.7542632861138681
- name: F1
type: f1
value: 0.7435293071244329
- name: Accuracy
type: accuracy
value: 0.8883011190233978
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-ner
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3685
- Precision: 0.7331
- Recall: 0.7543
- F1: 0.7435
- Accuracy: 0.8883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5465 | 1.0 | 1250 | 0.4158 | 0.6932 | 0.7201 | 0.7064 | 0.8735 |
| 0.4037 | 2.0 | 2500 | 0.3817 | 0.7191 | 0.7470 | 0.7328 | 0.8828 |
| 0.3606 | 3.0 | 3750 | 0.3685 | 0.7331 | 0.7543 | 0.7435 | 0.8883 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
fergusq/finbert-finnsentiment | 132114fa461bd591d0861cf10d0299f9d227f22d | 2021-09-30T20:41:06.000Z | [
"pytorch",
"bert",
"text-classification",
"fi",
"arxiv:2012.02613",
"transformers"
] | text-classification | false | fergusq | null | fergusq/finbert-finnsentiment | 21 | 1 | transformers | 8,167 | ---
language: fi
---
# FinBERT fine-tuned with the FinnSentiment dataset
This is a FinBERT model fine-tuned with the [FinnSentiment dataset](https://arxiv.org/pdf/2012.02613.pdf).
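## Usage
A minimal usage sketch (not part of the original card); the Finnish example sentence is illustrative only.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fergusq/finbert-finnsentiment")

# Illustrative Finnish input: "This movie was really good."
print(classifier("Tämä elokuva oli todella hyvä."))
```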
|
fidukm34/biobert_v1.1_pubmed-finetuned-ner | 43583cebe51e3bcb4a135f83cc5e216e415b6d38 | 2021-09-16T17:09:50.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:ncbi_disease",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | false | fidukm34 | null | fidukm34/biobert_v1.1_pubmed-finetuned-ner | 21 | null | transformers | 8,168 | ---
tags:
- generated_from_trainer
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: biobert_v1.1_pubmed-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ncbi_disease
type: ncbi_disease
args: ncbi_disease
metric:
name: Accuracy
type: accuracy
value: 0.9827274990663513
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_v1.1_pubmed-finetuned-ner
This model is a fine-tuned version of [monologg/biobert_v1.1_pubmed](https://huggingface.co/monologg/biobert_v1.1_pubmed) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
- Precision: 0.8338
- Recall: 0.8933
- F1: 0.8625
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 340 | 0.0612 | 0.8268 | 0.85 | 0.8382 | 0.9806 |
| 0.0987 | 2.0 | 680 | 0.0604 | 0.8397 | 0.8848 | 0.8616 | 0.9829 |
| 0.0272 | 3.0 | 1020 | 0.0657 | 0.8338 | 0.8933 | 0.8625 | 0.9827 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.9.0
- Datasets 1.6.2
- Tokenizers 0.10.3
|
gabrieljg/wav2vec2-common_voice-es-demo | ba9f8bb7d9ceb676c5939e817e2e3f45533327ac | 2022-01-30T21:38:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gabrieljg | null | gabrieljg/wav2vec2-common_voice-es-demo | 21 | null | transformers | 8,169 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-es-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-es-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1788
- Wer: 1.0239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.02 | 100 | 6.6465 | 1.0 |
| No log | 0.04 | 200 | 3.0150 | 1.0 |
| No log | 0.05 | 300 | 2.8622 | 1.0003 |
| No log | 0.07 | 400 | 0.9506 | 0.9771 |
| 5.1598 | 0.09 | 500 | 0.4883 | 1.0009 |
| 5.1598 | 0.11 | 600 | 0.3893 | 1.0203 |
| 5.1598 | 0.13 | 700 | 0.3417 | 1.0283 |
| 5.1598 | 0.14 | 800 | 0.3352 | 1.0335 |
| 5.1598 | 0.16 | 900 | 0.2987 | 1.0168 |
| 0.3671 | 0.18 | 1000 | 0.2921 | 1.0159 |
| 0.3671 | 0.2 | 1100 | 0.2770 | 1.0096 |
| 0.3671 | 0.22 | 1200 | 0.2790 | 1.0398 |
| 0.3671 | 0.24 | 1300 | 0.2659 | 1.0190 |
| 0.3671 | 0.25 | 1400 | 0.2657 | 1.0528 |
| 0.289 | 0.27 | 1500 | 0.2556 | 1.0301 |
| 0.289 | 0.29 | 1600 | 0.2514 | 1.0193 |
| 0.289 | 0.31 | 1700 | 0.2708 | 1.0699 |
| 0.289 | 0.33 | 1800 | 0.2455 | 1.0723 |
| 0.289 | 0.34 | 1900 | 0.2456 | 1.0100 |
| 0.271 | 0.36 | 2000 | 0.2338 | 1.0533 |
| 0.271 | 0.38 | 2100 | 0.2479 | 1.0128 |
| 0.271 | 0.4 | 2200 | 0.2483 | 1.0386 |
| 0.271 | 0.42 | 2300 | 0.2436 | 1.0528 |
| 0.271 | 0.43 | 2400 | 0.2382 | 1.0476 |
| 0.2634 | 0.45 | 2500 | 0.2329 | 1.0680 |
| 0.2634 | 0.47 | 2600 | 0.2433 | 1.0581 |
| 0.2634 | 0.49 | 2700 | 0.2354 | 1.0641 |
| 0.2634 | 0.51 | 2800 | 0.2318 | 1.0504 |
| 0.2634 | 0.52 | 2900 | 0.2325 | 1.0500 |
| 0.2522 | 0.54 | 3000 | 0.2344 | 1.0380 |
| 0.2522 | 0.56 | 3100 | 0.2244 | 1.0663 |
| 0.2522 | 0.58 | 3200 | 0.2340 | 1.0647 |
| 0.2522 | 0.6 | 3300 | 0.2288 | 1.0538 |
| 0.2522 | 0.61 | 3400 | 0.2212 | 1.0614 |
| 0.2468 | 0.63 | 3500 | 0.2487 | 1.0557 |
| 0.2468 | 0.65 | 3600 | 0.2330 | 1.0510 |
| 0.2468 | 0.67 | 3700 | 0.2308 | 1.0506 |
| 0.2468 | 0.69 | 3800 | 0.2320 | 1.0451 |
| 0.2468 | 0.71 | 3900 | 0.2261 | 1.0701 |
| 0.2505 | 0.72 | 4000 | 0.2281 | 1.0713 |
| 0.2505 | 0.74 | 4100 | 0.2277 | 1.0741 |
| 0.2505 | 0.76 | 4200 | 0.2253 | 1.0814 |
| 0.2505 | 0.78 | 4300 | 0.2215 | 1.0437 |
| 0.2505 | 0.8 | 4400 | 0.2220 | 1.0557 |
| 0.2434 | 0.81 | 4500 | 0.2184 | 1.0533 |
| 0.2434 | 0.83 | 4600 | 0.2222 | 1.0819 |
| 0.2434 | 0.85 | 4700 | 0.2162 | 1.0238 |
| 0.2434 | 0.87 | 4800 | 0.2132 | 1.0457 |
| 0.2434 | 0.89 | 4900 | 0.2068 | 1.0611 |
| 0.2347 | 0.9 | 5000 | 0.2166 | 1.0332 |
| 0.2347 | 0.92 | 5100 | 0.2087 | 1.0433 |
| 0.2347 | 0.94 | 5200 | 0.2100 | 1.0292 |
| 0.2347 | 0.96 | 5300 | 0.2067 | 1.0734 |
| 0.2347 | 0.98 | 5400 | 0.2148 | 1.0279 |
| 0.2333 | 0.99 | 5500 | 0.2125 | 1.0277 |
| 0.2333 | 1.01 | 5600 | 0.2054 | 1.0453 |
| 0.2333 | 1.03 | 5700 | 0.2091 | 1.0557 |
| 0.2333 | 1.05 | 5800 | 0.2086 | 1.0239 |
| 0.2333 | 1.07 | 5900 | 0.2051 | 1.0645 |
| 0.2087 | 1.09 | 6000 | 0.2103 | 1.0240 |
| 0.2087 | 1.1 | 6100 | 0.2145 | 1.0197 |
| 0.2087 | 1.12 | 6200 | 0.2136 | 1.0248 |
| 0.2087 | 1.14 | 6300 | 0.2045 | 1.0443 |
| 0.2087 | 1.16 | 6400 | 0.2089 | 1.0397 |
| 0.2013 | 1.18 | 6500 | 0.2012 | 1.0654 |
| 0.2013 | 1.19 | 6600 | 0.2054 | 1.0414 |
| 0.2013 | 1.21 | 6700 | 0.2081 | 1.0632 |
| 0.2013 | 1.23 | 6800 | 0.2104 | 1.0190 |
| 0.2013 | 1.25 | 6900 | 0.2045 | 1.0813 |
| 0.2092 | 1.27 | 7000 | 0.2096 | 1.0751 |
| 0.2092 | 1.28 | 7100 | 0.2103 | 1.0328 |
| 0.2092 | 1.3 | 7200 | 0.2044 | 1.0011 |
| 0.2092 | 1.32 | 7300 | 0.2089 | 1.0260 |
| 0.2092 | 1.34 | 7400 | 0.2063 | 1.0551 |
| 0.2076 | 1.36 | 7500 | 0.2029 | 1.0075 |
| 0.2076 | 1.37 | 7600 | 0.2040 | 1.0528 |
| 0.2076 | 1.39 | 7700 | 0.2075 | 1.0398 |
| 0.2076 | 1.41 | 7800 | 0.2023 | 1.0231 |
| 0.2076 | 1.43 | 7900 | 0.2049 | 1.0318 |
| 0.2028 | 1.45 | 8000 | 0.2072 | 1.0763 |
| 0.2028 | 1.47 | 8100 | 0.2075 | 1.0762 |
| 0.2028 | 1.48 | 8200 | 0.2052 | 1.0838 |
| 0.2028 | 1.5 | 8300 | 0.2053 | 1.0407 |
| 0.2028 | 1.52 | 8400 | 0.2066 | 1.0266 |
| 0.2025 | 1.54 | 8500 | 0.2037 | 1.0628 |
| 0.2025 | 1.56 | 8600 | 0.2010 | 1.0351 |
| 0.2025 | 1.57 | 8700 | 0.1961 | 1.0812 |
| 0.2025 | 1.59 | 8800 | 0.1963 | 1.0868 |
| 0.2025 | 1.61 | 8900 | 0.2022 | 1.0710 |
| 0.1997 | 1.63 | 9000 | 0.2051 | 1.0764 |
| 0.1997 | 1.65 | 9100 | 0.1987 | 1.0581 |
| 0.1997 | 1.66 | 9200 | 0.2051 | 1.0611 |
| 0.1997 | 1.68 | 9300 | 0.1999 | 1.0808 |
| 0.1997 | 1.7 | 9400 | 0.1972 | 1.0703 |
| 0.1983 | 1.72 | 9500 | 0.1961 | 1.0584 |
| 0.1983 | 1.74 | 9600 | 0.2031 | 1.0938 |
| 0.1983 | 1.75 | 9700 | 0.2019 | 1.0891 |
| 0.1983 | 1.77 | 9800 | 0.2006 | 1.0542 |
| 0.1983 | 1.79 | 9900 | 0.1925 | 1.0627 |
| 0.1961 | 1.81 | 10000 | 0.1976 | 1.0751 |
| 0.1961 | 1.83 | 10100 | 0.2051 | 1.0611 |
| 0.1961 | 1.85 | 10200 | 0.2037 | 1.0656 |
| 0.1961 | 1.86 | 10300 | 0.2025 | 1.0291 |
| 0.1961 | 1.88 | 10400 | 0.1977 | 1.0525 |
| 0.2025 | 1.9 | 10500 | 0.2030 | 1.0670 |
| 0.2025 | 1.92 | 10600 | 0.1980 | 1.0765 |
| 0.2025 | 1.94 | 10700 | 0.1975 | 1.0254 |
| 0.2025 | 1.95 | 10800 | 0.1986 | 1.0636 |
| 0.2025 | 1.97 | 10900 | 0.1956 | 1.0352 |
| 0.2025 | 1.99 | 11000 | 0.1954 | 1.0265 |
| 0.2025 | 2.01 | 11100 | 0.1957 | 1.0752 |
| 0.2025 | 2.03 | 11200 | 0.1943 | 1.0784 |
| 0.2025 | 2.04 | 11300 | 0.1898 | 1.0341 |
| 0.2025 | 2.06 | 11400 | 0.1921 | 1.0301 |
| 0.1805 | 2.08 | 11500 | 0.1910 | 1.0230 |
| 0.1805 | 2.1 | 11600 | 0.1961 | 1.0203 |
| 0.1805 | 2.12 | 11700 | 0.1973 | 1.0776 |
| 0.1805 | 2.13 | 11800 | 0.1876 | 1.0788 |
| 0.1805 | 2.15 | 11900 | 0.1934 | 1.0251 |
| 0.177 | 2.17 | 12000 | 0.1967 | 1.0340 |
| 0.177 | 2.19 | 12100 | 0.1932 | 1.0131 |
| 0.177 | 2.21 | 12200 | 0.1926 | 1.0078 |
| 0.177 | 2.23 | 12300 | 0.1947 | 0.9991 |
| 0.177 | 2.24 | 12400 | 0.1914 | 1.0213 |
| 0.1782 | 2.26 | 12500 | 0.1962 | 0.9882 |
| 0.1782 | 2.28 | 12600 | 0.1960 | 1.0562 |
| 0.1782 | 2.3 | 12700 | 0.2006 | 1.0401 |
| 0.1782 | 2.32 | 12800 | 0.1950 | 1.0688 |
| 0.1782 | 2.33 | 12900 | 0.1920 | 1.0435 |
| 0.1796 | 2.35 | 13000 | 0.1926 | 1.0667 |
| 0.1796 | 2.37 | 13100 | 0.1949 | 1.0859 |
| 0.1796 | 2.39 | 13200 | 0.1932 | 1.0670 |
| 0.1796 | 2.41 | 13300 | 0.1882 | 1.0663 |
| 0.1796 | 2.42 | 13400 | 0.1877 | 1.0760 |
| 0.1775 | 2.44 | 13500 | 0.1893 | 1.0859 |
| 0.1775 | 2.46 | 13600 | 0.1936 | 1.0702 |
| 0.1775 | 2.48 | 13700 | 0.1871 | 1.0414 |
| 0.1775 | 2.5 | 13800 | 0.1917 | 1.0430 |
| 0.1775 | 2.51 | 13900 | 0.1922 | 1.0422 |
| 0.1778 | 2.53 | 14000 | 0.1875 | 1.0585 |
| 0.1778 | 2.55 | 14100 | 0.1876 | 1.0603 |
| 0.1778 | 2.57 | 14200 | 0.1888 | 1.0628 |
| 0.1778 | 2.59 | 14300 | 0.1948 | 1.0782 |
| 0.1778 | 2.6 | 14400 | 0.1942 | 1.0695 |
| 0.1784 | 2.62 | 14500 | 0.1842 | 1.0863 |
| 0.1784 | 2.64 | 14600 | 0.1850 | 1.0543 |
| 0.1784 | 2.66 | 14700 | 0.1824 | 1.0683 |
| 0.1784 | 2.68 | 14800 | 0.1888 | 1.0693 |
| 0.1784 | 2.7 | 14900 | 0.1871 | 1.0175 |
| 0.1753 | 2.71 | 15000 | 0.1889 | 1.0549 |
| 0.1753 | 2.73 | 15100 | 0.1865 | 1.0544 |
| 0.1753 | 2.75 | 15200 | 0.1918 | 1.0726 |
| 0.1753 | 2.77 | 15300 | 0.1964 | 1.0915 |
| 0.1753 | 2.79 | 15400 | 0.1900 | 1.0610 |
| 0.1768 | 2.8 | 15500 | 0.1894 | 1.0763 |
| 0.1768 | 2.82 | 15600 | 0.1882 | 1.0548 |
| 0.1768 | 2.84 | 15700 | 0.1861 | 1.0902 |
| 0.1768 | 2.86 | 15800 | 0.1860 | 1.0551 |
| 0.1768 | 2.88 | 15900 | 0.1879 | 1.0581 |
| 0.1761 | 2.89 | 16000 | 0.1899 | 1.0544 |
| 0.1761 | 2.91 | 16100 | 0.1860 | 1.0530 |
| 0.1761 | 2.93 | 16200 | 0.1894 | 1.0596 |
| 0.1761 | 2.95 | 16300 | 0.1835 | 1.0394 |
| 0.1761 | 2.97 | 16400 | 0.1852 | 1.0445 |
| 0.1754 | 2.98 | 16500 | 0.1847 | 1.0390 |
| 0.1754 | 3.0 | 16600 | 0.1828 | 1.0440 |
| 0.1754 | 3.02 | 16700 | 0.1869 | 1.0560 |
| 0.1754 | 3.04 | 16800 | 0.1882 | 1.0573 |
| 0.1754 | 3.06 | 16900 | 0.1912 | 1.0600 |
| 0.1592 | 3.08 | 17000 | 0.1921 | 1.0529 |
| 0.1592 | 3.09 | 17100 | 0.1881 | 1.0175 |
| 0.1592 | 3.11 | 17200 | 0.1891 | 1.0654 |
| 0.1592 | 3.13 | 17300 | 0.1889 | 1.0687 |
| 0.1592 | 3.15 | 17400 | 0.1916 | 1.0642 |
| 0.1556 | 3.17 | 17500 | 0.1850 | 1.0295 |
| 0.1556 | 3.18 | 17600 | 0.1875 | 1.0273 |
| 0.1556 | 3.2 | 17700 | 0.1894 | 1.0051 |
| 0.1556 | 3.22 | 17800 | 0.1870 | 1.0462 |
| 0.1556 | 3.24 | 17900 | 0.1831 | 1.0308 |
| 0.1557 | 3.26 | 18000 | 0.1878 | 1.0603 |
| 0.1557 | 3.27 | 18100 | 0.1850 | 1.0566 |
| 0.1557 | 3.29 | 18200 | 0.1843 | 1.0629 |
| 0.1557 | 3.31 | 18300 | 0.1886 | 1.0378 |
| 0.1557 | 3.33 | 18400 | 0.1892 | 1.0381 |
| 0.159 | 3.35 | 18500 | 0.1942 | 1.0519 |
| 0.159 | 3.36 | 18600 | 0.1829 | 1.0622 |
| 0.159 | 3.38 | 18700 | 0.1894 | 1.0557 |
| 0.159 | 3.4 | 18800 | 0.1895 | 1.0627 |
| 0.159 | 3.42 | 18900 | 0.1863 | 1.0362 |
| 0.1582 | 3.44 | 19000 | 0.1888 | 1.0491 |
| 0.1582 | 3.46 | 19100 | 0.1854 | 1.0483 |
| 0.1582 | 3.47 | 19200 | 0.1797 | 0.9787 |
| 0.1582 | 3.49 | 19300 | 0.1785 | 1.0086 |
| 0.1582 | 3.51 | 19400 | 0.1797 | 0.9915 |
| 0.1507 | 3.53 | 19500 | 0.1873 | 1.0266 |
| 0.1507 | 3.55 | 19600 | 0.1838 | 1.0299 |
| 0.1507 | 3.56 | 19700 | 0.1817 | 1.0355 |
| 0.1507 | 3.58 | 19800 | 0.1819 | 1.0271 |
| 0.1507 | 3.6 | 19900 | 0.1883 | 1.0248 |
| 0.1601 | 3.62 | 20000 | 0.1823 | 1.0406 |
| 0.1601 | 3.64 | 20100 | 0.1801 | 1.0261 |
| 0.1601 | 3.65 | 20200 | 0.1783 | 1.0329 |
| 0.1601 | 3.67 | 20300 | 0.1857 | 1.0162 |
| 0.1601 | 3.69 | 20400 | 0.1814 | 1.0212 |
| 0.1552 | 3.71 | 20500 | 0.1837 | 1.0232 |
| 0.1552 | 3.73 | 20600 | 0.1843 | 1.0314 |
| 0.1552 | 3.74 | 20700 | 0.1842 | 1.0258 |
| 0.1552 | 3.76 | 20800 | 0.1821 | 1.0479 |
| 0.1552 | 3.78 | 20900 | 0.1864 | 1.0459 |
| 0.1576 | 3.8 | 21000 | 0.1831 | 1.0364 |
| 0.1576 | 3.82 | 21100 | 0.1852 | 1.0271 |
| 0.1576 | 3.83 | 21200 | 0.1865 | 1.0204 |
| 0.1576 | 3.85 | 21300 | 0.1794 | 1.0324 |
| 0.1576 | 3.87 | 21400 | 0.1826 | 1.0315 |
| 0.1585 | 3.89 | 21500 | 0.1824 | 1.0327 |
| 0.1585 | 3.91 | 21600 | 0.1838 | 1.0208 |
| 0.1585 | 3.93 | 21700 | 0.1850 | 1.0199 |
| 0.1585 | 3.94 | 21800 | 0.1841 | 1.0050 |
| 0.1585 | 3.96 | 21900 | 0.1783 | 1.0003 |
| 0.1572 | 3.98 | 22000 | 0.1787 | 1.0115 |
| 0.1572 | 4.0 | 22100 | 0.1810 | 1.0235 |
| 0.1572 | 4.02 | 22200 | 0.1763 | 1.0191 |
| 0.1572 | 4.03 | 22300 | 0.1764 | 1.0332 |
| 0.1572 | 4.05 | 22400 | 0.1794 | 1.0429 |
| 0.1406 | 4.07 | 22500 | 0.1905 | 1.0288 |
| 0.1406 | 4.09 | 22600 | 0.1776 | 1.0244 |
| 0.1406 | 4.11 | 22700 | 0.1782 | 1.0451 |
| 0.1406 | 4.12 | 22800 | 0.1771 | 1.0387 |
| 0.1406 | 4.14 | 22900 | 0.1788 | 1.0435 |
| 0.14 | 4.16 | 23000 | 0.1792 | 1.0421 |
| 0.14 | 4.18 | 23100 | 0.1841 | 1.0241 |
| 0.14 | 4.2 | 23200 | 0.1769 | 1.0546 |
| 0.14 | 4.21 | 23300 | 0.1815 | 1.0602 |
| 0.14 | 4.23 | 23400 | 0.1784 | 1.0369 |
| 0.1394 | 4.25 | 23500 | 0.1809 | 1.0406 |
| 0.1394 | 4.27 | 23600 | 0.1744 | 1.0133 |
| 0.1394 | 4.29 | 23700 | 0.1771 | 1.0214 |
| 0.1394 | 4.31 | 23800 | 0.1765 | 1.0064 |
| 0.1394 | 4.32 | 23900 | 0.1793 | 1.0200 |
| 0.14 | 4.34 | 24000 | 0.1776 | 1.0352 |
| 0.14 | 4.36 | 24100 | 0.1775 | 1.0294 |
| 0.14 | 4.38 | 24200 | 0.1763 | 1.0213 |
| 0.14 | 4.4 | 24300 | 0.1697 | 1.0302 |
| 0.14 | 4.41 | 24400 | 0.1771 | 1.0259 |
| 0.1408 | 4.43 | 24500 | 0.1747 | 1.0409 |
| 0.1408 | 4.45 | 24600 | 0.1769 | 1.0278 |
| 0.1408 | 4.47 | 24700 | 0.1767 | 1.0190 |
| 0.1408 | 4.49 | 24800 | 0.1745 | 1.0281 |
| 0.1408 | 4.5 | 24900 | 0.1738 | 1.0356 |
| 0.1391 | 4.52 | 25000 | 0.1781 | 1.0429 |
| 0.1391 | 4.54 | 25100 | 0.1784 | 1.0076 |
| 0.1391 | 4.56 | 25200 | 0.1771 | 1.0157 |
| 0.1391 | 4.58 | 25300 | 0.1758 | 1.0337 |
| 0.1391 | 4.59 | 25400 | 0.1758 | 1.0466 |
| 0.1398 | 4.61 | 25500 | 0.1724 | 1.0403 |
| 0.1398 | 4.63 | 25600 | 0.1765 | 1.0481 |
| 0.1398 | 4.65 | 25700 | 0.1757 | 1.0320 |
| 0.1398 | 4.67 | 25800 | 0.1814 | 1.0479 |
| 0.1398 | 4.69 | 25900 | 0.1713 | 1.0251 |
| 0.1427 | 4.7 | 26000 | 0.1735 | 1.0340 |
| 0.1427 | 4.72 | 26100 | 0.1765 | 1.0358 |
| 0.1427 | 4.74 | 26200 | 0.1731 | 1.0220 |
| 0.1427 | 4.76 | 26300 | 0.1769 | 1.0261 |
| 0.1427 | 4.78 | 26400 | 0.1747 | 1.0139 |
| 0.1424 | 4.79 | 26500 | 0.1791 | 1.0406 |
| 0.1424 | 4.81 | 26600 | 0.1735 | 1.0497 |
| 0.1424 | 4.83 | 26700 | 0.1710 | 1.0433 |
| 0.1424 | 4.85 | 26800 | 0.1771 | 1.0002 |
| 0.1424 | 4.87 | 26900 | 0.1748 | 1.0046 |
| 0.1419 | 4.88 | 27000 | 0.1794 | 1.0332 |
| 0.1419 | 4.9 | 27100 | 0.1772 | 1.0558 |
| 0.1419 | 4.92 | 27200 | 0.1757 | 1.0477 |
| 0.1419 | 4.94 | 27300 | 0.1735 | 1.0324 |
| 0.1419 | 4.96 | 27400 | 0.1758 | 1.0260 |
| 0.1433 | 4.97 | 27500 | 0.1767 | 1.0422 |
| 0.1433 | 4.99 | 27600 | 0.1695 | 1.0386 |
| 0.1433 | 5.01 | 27700 | 0.1763 | 1.0571 |
| 0.1433 | 5.03 | 27800 | 0.1743 | 1.0367 |
| 0.1433 | 5.05 | 27900 | 0.1804 | 1.0255 |
| 0.1306 | 5.07 | 28000 | 0.1803 | 1.0377 |
| 0.1306 | 5.08 | 28100 | 0.1750 | 1.0552 |
| 0.1306 | 5.1 | 28200 | 0.1743 | 1.0512 |
| 0.1306 | 5.12 | 28300 | 0.1777 | 1.0584 |
| 0.1306 | 5.14 | 28400 | 0.1726 | 1.0374 |
| 0.123 | 5.16 | 28500 | 0.1776 | 1.0439 |
| 0.123 | 5.17 | 28600 | 0.1759 | 1.0682 |
| 0.123 | 5.19 | 28700 | 0.1724 | 1.0511 |
| 0.123 | 5.21 | 28800 | 0.1677 | 1.0560 |
| 0.123 | 5.23 | 28900 | 0.1699 | 1.0421 |
| 0.1217 | 5.25 | 29000 | 0.1803 | 1.0370 |
| 0.1217 | 5.26 | 29100 | 0.1770 | 1.0474 |
| 0.1217 | 5.28 | 29200 | 0.1733 | 1.0332 |
| 0.1217 | 5.3 | 29300 | 0.1746 | 1.0158 |
| 0.1217 | 5.32 | 29400 | 0.1763 | 1.0341 |
| 0.1246 | 5.34 | 29500 | 0.1775 | 1.0348 |
| 0.1246 | 5.35 | 29600 | 0.1730 | 1.0492 |
| 0.1246 | 5.37 | 29700 | 0.1730 | 1.0503 |
| 0.1246 | 5.39 | 29800 | 0.1727 | 1.0437 |
| 0.1246 | 5.41 | 29900 | 0.1744 | 1.0539 |
| 0.127 | 5.43 | 30000 | 0.1748 | 1.0463 |
| 0.127 | 5.44 | 30100 | 0.1746 | 1.0555 |
| 0.127 | 5.46 | 30200 | 0.1810 | 1.0558 |
| 0.127 | 5.48 | 30300 | 0.1773 | 1.0407 |
| 0.127 | 5.5 | 30400 | 0.1722 | 1.0489 |
| 0.1276 | 5.52 | 30500 | 0.1720 | 1.0520 |
| 0.1276 | 5.54 | 30600 | 0.1777 | 1.0347 |
| 0.1276 | 5.55 | 30700 | 0.1685 | 1.0347 |
| 0.1276 | 5.57 | 30800 | 0.1659 | 1.0338 |
| 0.1276 | 5.59 | 30900 | 0.1756 | 1.0228 |
| 0.1246 | 5.61 | 31000 | 0.1717 | 1.0409 |
| 0.1246 | 5.63 | 31100 | 0.1764 | 1.0202 |
| 0.1246 | 5.64 | 31200 | 0.1693 | 1.0314 |
| 0.1246 | 5.66 | 31300 | 0.1731 | 1.0319 |
| 0.1246 | 5.68 | 31400 | 0.1688 | 1.0380 |
| 0.1271 | 5.7 | 31500 | 0.1671 | 1.0350 |
| 0.1271 | 5.72 | 31600 | 0.1676 | 1.0430 |
| 0.1271 | 5.73 | 31700 | 0.1656 | 1.0441 |
| 0.1271 | 5.75 | 31800 | 0.1664 | 1.0403 |
| 0.1271 | 5.77 | 31900 | 0.1691 | 1.0152 |
| 0.1259 | 5.79 | 32000 | 0.1702 | 1.0018 |
| 0.1259 | 5.81 | 32100 | 0.1664 | 1.0246 |
| 0.1259 | 5.82 | 32200 | 0.1737 | 1.0340 |
| 0.1259 | 5.84 | 32300 | 0.1742 | 1.0449 |
| 0.1259 | 5.86 | 32400 | 0.1707 | 1.0279 |
| 0.1273 | 5.88 | 32500 | 0.1697 | 1.0471 |
| 0.1273 | 5.9 | 32600 | 0.1668 | 1.0322 |
| 0.1273 | 5.92 | 32700 | 0.1706 | 1.0378 |
| 0.1273 | 5.93 | 32800 | 0.1704 | 1.0350 |
| 0.1273 | 5.95 | 32900 | 0.1725 | 1.0244 |
| 0.123 | 5.97 | 33000 | 0.1678 | 1.0447 |
| 0.123 | 5.99 | 33100 | 0.1681 | 1.0438 |
| 0.123 | 6.01 | 33200 | 0.1689 | 1.0297 |
| 0.123 | 6.02 | 33300 | 0.1690 | 1.0333 |
| 0.123 | 6.04 | 33400 | 0.1734 | 1.0296 |
| 0.1163 | 6.06 | 33500 | 0.1748 | 1.0307 |
| 0.1163 | 6.08 | 33600 | 0.1715 | 1.0123 |
| 0.1163 | 6.1 | 33700 | 0.1668 | 1.0117 |
| 0.1163 | 6.11 | 33800 | 0.1690 | 1.0230 |
| 0.1163 | 6.13 | 33900 | 0.1693 | 1.0166 |
| 0.1101 | 6.15 | 34000 | 0.1728 | 1.0162 |
| 0.1101 | 6.17 | 34100 | 0.1683 | 1.0107 |
| 0.1101 | 6.19 | 34200 | 0.1703 | 0.9814 |
| 0.1101 | 6.2 | 34300 | 0.1692 | 1.0007 |
| 0.1101 | 6.22 | 34400 | 0.1690 | 1.0000 |
| 0.1118 | 6.24 | 34500 | 0.1734 | 0.9972 |
| 0.1118 | 6.26 | 34600 | 0.1739 | 1.0096 |
| 0.1118 | 6.28 | 34700 | 0.1749 | 1.0047 |
| 0.1118 | 6.3 | 34800 | 0.1709 | 1.0111 |
| 0.1118 | 6.31 | 34900 | 0.1717 | 1.0179 |
| 0.1153 | 6.33 | 35000 | 0.1690 | 1.0155 |
| 0.1153 | 6.35 | 35100 | 0.1710 | 1.0144 |
| 0.1153 | 6.37 | 35200 | 0.1719 | 1.0030 |
| 0.1153 | 6.39 | 35300 | 0.1690 | 1.0272 |
| 0.1153 | 6.4 | 35400 | 0.1673 | 1.0103 |
| 0.1106 | 6.42 | 35500 | 0.1710 | 1.0222 |
| 0.1106 | 6.44 | 35600 | 0.1747 | 1.0173 |
| 0.1106 | 6.46 | 35700 | 0.1721 | 0.9933 |
| 0.1106 | 6.48 | 35800 | 0.1670 | 1.0184 |
| 0.1106 | 6.49 | 35900 | 0.1714 | 1.0122 |
| 0.1116 | 6.51 | 36000 | 0.1717 | 1.0035 |
| 0.1116 | 6.53 | 36100 | 0.1685 | 1.0099 |
| 0.1116 | 6.55 | 36200 | 0.1687 | 1.0288 |
| 0.1116 | 6.57 | 36300 | 0.1664 | 1.0314 |
| 0.1116 | 6.58 | 36400 | 0.1665 | 1.0264 |
| 0.1128 | 6.6 | 36500 | 0.1681 | 1.0420 |
| 0.1128 | 6.62 | 36600 | 0.1682 | 1.0409 |
| 0.1128 | 6.64 | 36700 | 0.1717 | 1.0271 |
| 0.1128 | 6.66 | 36800 | 0.1717 | 1.0166 |
| 0.1128 | 6.68 | 36900 | 0.1755 | 1.0175 |
| 0.1134 | 6.69 | 37000 | 0.1623 | 1.0185 |
| 0.1134 | 6.71 | 37100 | 0.1674 | 1.0302 |
| 0.1134 | 6.73 | 37200 | 0.1633 | 1.0325 |
| 0.1134 | 6.75 | 37300 | 0.1628 | 1.0228 |
| 0.1134 | 6.77 | 37400 | 0.1636 | 1.0243 |
| 0.1102 | 6.78 | 37500 | 0.1667 | 1.0282 |
| 0.1102 | 6.8 | 37600 | 0.1623 | 1.0212 |
| 0.1102 | 6.82 | 37700 | 0.1639 | 1.0140 |
| 0.1102 | 6.84 | 37800 | 0.1587 | 1.0258 |
| 0.1102 | 6.86 | 37900 | 0.1610 | 1.0087 |
| 0.1113 | 6.87 | 38000 | 0.1647 | 1.0199 |
| 0.1113 | 6.89 | 38100 | 0.1609 | 1.0054 |
| 0.1113 | 6.91 | 38200 | 0.1602 | 1.0145 |
| 0.1113 | 6.93 | 38300 | 0.1602 | 1.0144 |
| 0.1113 | 6.95 | 38400 | 0.1602 | 1.0375 |
| 0.1071 | 6.96 | 38500 | 0.1592 | 1.0259 |
| 0.1071 | 6.98 | 38600 | 0.1612 | 1.0236 |
| 0.1071 | 7.0 | 38700 | 0.1621 | 1.0277 |
| 0.1071 | 7.02 | 38800 | 0.1669 | 1.0367 |
| 0.1071 | 7.04 | 38900 | 0.1742 | 1.0484 |
| 0.1062 | 7.05 | 39000 | 0.1752 | 1.0302 |
| 0.1062 | 7.07 | 39100 | 0.1676 | 1.0244 |
| 0.1062 | 7.09 | 39200 | 0.1723 | 1.0300 |
| 0.1062 | 7.11 | 39300 | 0.1727 | 1.0294 |
| 0.1062 | 7.13 | 39400 | 0.1711 | 1.0255 |
| 0.1021 | 7.15 | 39500 | 0.1699 | 1.0471 |
| 0.1021 | 7.16 | 39600 | 0.1682 | 1.0426 |
| 0.1021 | 7.18 | 39700 | 0.1713 | 1.0233 |
| 0.1021 | 7.2 | 39800 | 0.1682 | 1.0259 |
| 0.1021 | 7.22 | 39900 | 0.1710 | 1.0162 |
| 0.103 | 7.24 | 40000 | 0.1725 | 1.0283 |
| 0.103 | 7.25 | 40100 | 0.1729 | 1.0264 |
| 0.103 | 7.27 | 40200 | 0.1665 | 1.0451 |
| 0.103 | 7.29 | 40300 | 0.1671 | 1.0386 |
| 0.103 | 7.31 | 40400 | 0.1671 | 1.0316 |
| 0.0981 | 7.33 | 40500 | 0.1708 | 1.0257 |
| 0.0981 | 7.34 | 40600 | 0.1642 | 1.0152 |
| 0.0981 | 7.36 | 40700 | 0.1707 | 1.0110 |
| 0.0981 | 7.38 | 40800 | 0.1675 | 1.0186 |
| 0.0981 | 7.4 | 40900 | 0.1702 | 1.0123 |
| 0.1005 | 7.42 | 41000 | 0.1699 | 1.0159 |
| 0.1005 | 7.43 | 41100 | 0.1703 | 1.0219 |
| 0.1005 | 7.45 | 41200 | 0.1707 | 1.0194 |
| 0.1005 | 7.47 | 41300 | 0.1644 | 1.0016 |
| 0.1005 | 7.49 | 41400 | 0.1716 | 0.9941 |
| 0.1021 | 7.51 | 41500 | 0.1670 | 1.0159 |
| 0.1021 | 7.53 | 41600 | 0.1667 | 1.0033 |
| 0.1021 | 7.54 | 41700 | 0.1667 | 1.0176 |
| 0.1021 | 7.56 | 41800 | 0.1679 | 1.0194 |
| 0.1021 | 7.58 | 41900 | 0.1632 | 1.0418 |
| 0.0963 | 7.6 | 42000 | 0.1712 | 1.0152 |
| 0.0963 | 7.62 | 42100 | 0.1632 | 1.0364 |
| 0.0963 | 7.63 | 42200 | 0.1702 | 1.0229 |
| 0.0963 | 7.65 | 42300 | 0.1655 | 1.0179 |
| 0.0963 | 7.67 | 42400 | 0.1698 | 1.0329 |
| 0.1014 | 7.69 | 42500 | 0.1691 | 1.0398 |
| 0.1014 | 7.71 | 42600 | 0.1638 | 1.0487 |
| 0.1014 | 7.72 | 42700 | 0.1617 | 1.0210 |
| 0.1014 | 7.74 | 42800 | 0.1648 | 1.0124 |
| 0.1014 | 7.76 | 42900 | 0.1608 | 1.0202 |
| 0.1008 | 7.78 | 43000 | 0.1611 | 1.0353 |
| 0.1008 | 7.8 | 43100 | 0.1633 | 1.0319 |
| 0.1008 | 7.81 | 43200 | 0.1640 | 1.0032 |
| 0.1008 | 7.83 | 43300 | 0.1589 | 0.9985 |
| 0.1008 | 7.85 | 43400 | 0.1630 | 0.9975 |
| 0.0988 | 7.87 | 43500 | 0.1604 | 1.0053 |
| 0.0988 | 7.89 | 43600 | 0.1687 | 1.0063 |
| 0.0988 | 7.91 | 43700 | 0.1619 | 1.0096 |
| 0.0988 | 7.92 | 43800 | 0.1565 | 0.9901 |
| 0.0988 | 7.94 | 43900 | 0.1619 | 0.9742 |
| 0.102 | 7.96 | 44000 | 0.1598 | 0.9593 |
| 0.102 | 7.98 | 44100 | 0.1635 | 0.9718 |
| 0.102 | 8.0 | 44200 | 0.1624 | 0.9903 |
| 0.102 | 8.01 | 44300 | 0.1605 | 0.9882 |
| 0.102 | 8.03 | 44400 | 0.1657 | 1.0128 |
| 0.0961 | 8.05 | 44500 | 0.1651 | 1.0155 |
| 0.0961 | 8.07 | 44600 | 0.1680 | 1.0194 |
| 0.0961 | 8.09 | 44700 | 0.1694 | 1.0112 |
| 0.0961 | 8.1 | 44800 | 0.1665 | 1.0073 |
| 0.0961 | 8.12 | 44900 | 0.1612 | 1.0200 |
| 0.0894 | 8.14 | 45000 | 0.1652 | 1.0337 |
| 0.0894 | 8.16 | 45100 | 0.1626 | 1.0086 |
| 0.0894 | 8.18 | 45200 | 0.1639 | 1.0083 |
| 0.0894 | 8.19 | 45300 | 0.1634 | 1.0223 |
| 0.0894 | 8.21 | 45400 | 0.1631 | 1.0339 |
| 0.0887 | 8.23 | 45500 | 0.1640 | 1.0311 |
| 0.0887 | 8.25 | 45600 | 0.1661 | 1.0264 |
| 0.0887 | 8.27 | 45700 | 0.1650 | 1.0315 |
| 0.0887 | 8.29 | 45800 | 0.1624 | 1.0390 |
| 0.0887 | 8.3 | 45900 | 0.1624 | 1.0350 |
| 0.0884 | 8.32 | 46000 | 0.1615 | 1.0318 |
| 0.0884 | 8.34 | 46100 | 0.1628 | 1.0410 |
| 0.0884 | 8.36 | 46200 | 0.1627 | 1.0429 |
| 0.0884 | 8.38 | 46300 | 0.1644 | 1.0320 |
| 0.0884 | 8.39 | 46400 | 0.1633 | 1.0177 |
| 0.0893 | 8.41 | 46500 | 0.1654 | 1.0189 |
| 0.0893 | 8.43 | 46600 | 0.1598 | 1.0154 |
| 0.0893 | 8.45 | 46700 | 0.1618 | 1.0250 |
| 0.0893 | 8.47 | 46800 | 0.1639 | 1.0402 |
| 0.0893 | 8.48 | 46900 | 0.1616 | 1.0336 |
| 0.0869 | 8.5 | 47000 | 0.1613 | 1.0296 |
| 0.0869 | 8.52 | 47100 | 0.1648 | 1.0568 |
| 0.0869 | 8.54 | 47200 | 0.1625 | 1.0256 |
| 0.0869 | 8.56 | 47300 | 0.1609 | 1.0390 |
| 0.0869 | 8.57 | 47400 | 0.1606 | 1.0450 |
| 0.0894 | 8.59 | 47500 | 0.1605 | 1.0445 |
| 0.0894 | 8.61 | 47600 | 0.1660 | 1.0402 |
| 0.0894 | 8.63 | 47700 | 0.1618 | 1.0444 |
| 0.0894 | 8.65 | 47800 | 0.1669 | 1.0333 |
| 0.0894 | 8.66 | 47900 | 0.1627 | 1.0364 |
| 0.0885 | 8.68 | 48000 | 0.1616 | 1.0334 |
| 0.0885 | 8.7 | 48100 | 0.1626 | 1.0564 |
| 0.0885 | 8.72 | 48200 | 0.1624 | 1.0396 |
| 0.0885 | 8.74 | 48300 | 0.1623 | 1.0396 |
| 0.0885 | 8.76 | 48400 | 0.1612 | 1.0112 |
| 0.0888 | 8.77 | 48500 | 0.1638 | 1.0292 |
| 0.0888 | 8.79 | 48600 | 0.1639 | 0.9988 |
| 0.0888 | 8.81 | 48700 | 0.1618 | 1.0127 |
| 0.0888 | 8.83 | 48800 | 0.1584 | 1.0042 |
| 0.0888 | 8.85 | 48900 | 0.1615 | 1.0041 |
| 0.0887 | 8.86 | 49000 | 0.1637 | 1.0269 |
| 0.0887 | 8.88 | 49100 | 0.1627 | 0.9989 |
| 0.0887 | 8.9 | 49200 | 0.1583 | 1.0104 |
| 0.0887 | 8.92 | 49300 | 0.1600 | 1.0214 |
| 0.0887 | 8.94 | 49400 | 0.1599 | 1.0126 |
| 0.0893 | 8.95 | 49500 | 0.1595 | 1.0516 |
| 0.0893 | 8.97 | 49600 | 0.1625 | 1.0464 |
| 0.0893 | 8.99 | 49700 | 0.1595 | 1.0361 |
| 0.0893 | 9.01 | 49800 | 0.1614 | 1.0469 |
| 0.0893 | 9.03 | 49900 | 0.1612 | 1.0304 |
| 0.0834 | 9.04 | 50000 | 0.1643 | 1.0335 |
| 0.0834 | 9.06 | 50100 | 0.1640 | 1.0175 |
| 0.0834 | 9.08 | 50200 | 0.1655 | 1.0264 |
| 0.0834 | 9.1 | 50300 | 0.1678 | 1.0243 |
| 0.0834 | 9.12 | 50400 | 0.1659 | 1.0145 |
| 0.079 | 9.14 | 50500 | 0.1644 | 1.0316 |
| 0.079 | 9.15 | 50600 | 0.1630 | 1.0326 |
| 0.079 | 9.17 | 50700 | 0.1634 | 1.0154 |
| 0.079 | 9.19 | 50800 | 0.1697 | 1.0095 |
| 0.079 | 9.21 | 50900 | 0.1678 | 1.0050 |
| 0.078 | 9.23 | 51000 | 0.1626 | 1.0159 |
| 0.078 | 9.24 | 51100 | 0.1666 | 1.0238 |
| 0.078 | 9.26 | 51200 | 0.1644 | 1.0244 |
| 0.078 | 9.28 | 51300 | 0.1655 | 1.0345 |
| 0.078 | 9.3 | 51400 | 0.1615 | 1.0237 |
| 0.0776 | 9.32 | 51500 | 0.1664 | 1.0180 |
| 0.0776 | 9.33 | 51600 | 0.1603 | 1.0208 |
| 0.0776 | 9.35 | 51700 | 0.1594 | 1.0230 |
| 0.0776 | 9.37 | 51800 | 0.1622 | 1.0201 |
| 0.0776 | 9.39 | 51900 | 0.1596 | 1.0039 |
| 0.0782 | 9.41 | 52000 | 0.1645 | 1.0204 |
| 0.0782 | 9.42 | 52100 | 0.1640 | 1.0318 |
| 0.0782 | 9.44 | 52200 | 0.1621 | 1.0290 |
| 0.0782 | 9.46 | 52300 | 0.1638 | 1.0318 |
| 0.0782 | 9.48 | 52400 | 0.1613 | 1.0217 |
| 0.0782 | 9.5 | 52500 | 0.1609 | 1.0261 |
| 0.0782 | 9.52 | 52600 | 0.1625 | 1.0101 |
| 0.0782 | 9.53 | 52700 | 0.1613 | 1.0058 |
| 0.0782 | 9.55 | 52800 | 0.1599 | 1.0068 |
| 0.0782 | 9.57 | 52900 | 0.1600 | 1.0110 |
| 0.0797 | 9.59 | 53000 | 0.1594 | 1.0171 |
| 0.0797 | 9.61 | 53100 | 0.1583 | 1.0124 |
| 0.0797 | 9.62 | 53200 | 0.1646 | 1.0093 |
| 0.0797 | 9.64 | 53300 | 0.1580 | 1.0201 |
| 0.0797 | 9.66 | 53400 | 0.1599 | 1.0207 |
| 0.0783 | 9.68 | 53500 | 0.1577 | 1.0226 |
| 0.0783 | 9.7 | 53600 | 0.1593 | 1.0160 |
| 0.0783 | 9.71 | 53700 | 0.1570 | 1.0173 |
| 0.0783 | 9.73 | 53800 | 0.1614 | 1.0299 |
| 0.0783 | 9.75 | 53900 | 0.1610 | 1.0184 |
| 0.0779 | 9.77 | 54000 | 0.1606 | 1.0173 |
| 0.0779 | 9.79 | 54100 | 0.1577 | 1.0032 |
| 0.0779 | 9.8 | 54200 | 0.1590 | 1.0070 |
| 0.0779 | 9.82 | 54300 | 0.1580 | 1.0257 |
| 0.0779 | 9.84 | 54400 | 0.1592 | 1.0108 |
| 0.0778 | 9.86 | 54500 | 0.1617 | 0.9907 |
| 0.0778 | 9.88 | 54600 | 0.1605 | 1.0189 |
| 0.0778 | 9.89 | 54700 | 0.1605 | 1.0177 |
| 0.0778 | 9.91 | 54800 | 0.1536 | 1.0275 |
| 0.0778 | 9.93 | 54900 | 0.1658 | 1.0282 |
| 0.0777 | 9.95 | 55000 | 0.1543 | 1.0385 |
| 0.0777 | 9.97 | 55100 | 0.1559 | 1.0375 |
| 0.0777 | 9.99 | 55200 | 0.1590 | 1.0215 |
| 0.0777 | 10.0 | 55300 | 0.1624 | 1.0242 |
| 0.0777 | 10.02 | 55400 | 0.1635 | 1.0244 |
| 0.0712 | 10.04 | 55500 | 0.1629 | 1.0298 |
| 0.0712 | 10.06 | 55600 | 0.1601 | 1.0299 |
| 0.0712 | 10.08 | 55700 | 0.1625 | 1.0117 |
| 0.0712 | 10.09 | 55800 | 0.1650 | 1.0233 |
| 0.0712 | 10.11 | 55900 | 0.1631 | 1.0061 |
| 0.0667 | 10.13 | 56000 | 0.1637 | 1.0226 |
| 0.0667 | 10.15 | 56100 | 0.1607 | 1.0042 |
| 0.0667 | 10.17 | 56200 | 0.1599 | 1.0117 |
| 0.0667 | 10.18 | 56300 | 0.1623 | 1.0246 |
| 0.0667 | 10.2 | 56400 | 0.1639 | 1.0294 |
| 0.0695 | 10.22 | 56500 | 0.1650 | 1.0232 |
| 0.0695 | 10.24 | 56600 | 0.1620 | 1.0289 |
| 0.0695 | 10.26 | 56700 | 0.1667 | 1.0209 |
| 0.0695 | 10.27 | 56800 | 0.1580 | 1.0163 |
| 0.0695 | 10.29 | 56900 | 0.1646 | 1.0293 |
| 0.0686 | 10.31 | 57000 | 0.1636 | 1.0106 |
| 0.0686 | 10.33 | 57100 | 0.1586 | 1.0044 |
| 0.0686 | 10.35 | 57200 | 0.1582 | 1.0213 |
| 0.0686 | 10.37 | 57300 | 0.1627 | 1.0151 |
| 0.0686 | 10.38 | 57400 | 0.1619 | 1.0248 |
| 0.0686 | 10.4 | 57500 | 0.1596 | 1.0098 |
| 0.0686 | 10.42 | 57600 | 0.1606 | 1.0031 |
| 0.0686 | 10.44 | 57700 | 0.1620 | 1.0046 |
| 0.0686 | 10.46 | 57800 | 0.1592 | 1.0018 |
| 0.0686 | 10.47 | 57900 | 0.1592 | 1.0058 |
| 0.0669 | 10.49 | 58000 | 0.1605 | 0.9961 |
| 0.0669 | 10.51 | 58100 | 0.1632 | 1.0102 |
| 0.0669 | 10.53 | 58200 | 0.1593 | 1.0061 |
| 0.0669 | 10.55 | 58300 | 0.1586 | 1.0091 |
| 0.0669 | 10.56 | 58400 | 0.1603 | 1.0085 |
| 0.068 | 10.58 | 58500 | 0.1579 | 1.0031 |
| 0.068 | 10.6 | 58600 | 0.1591 | 1.0021 |
| 0.068 | 10.62 | 58700 | 0.1590 | 1.0163 |
| 0.068 | 10.64 | 58800 | 0.1584 | 1.0045 |
| 0.068 | 10.65 | 58900 | 0.1594 | 1.0158 |
| 0.0693 | 10.67 | 59000 | 0.1568 | 1.0052 |
| 0.0693 | 10.69 | 59100 | 0.1581 | 0.9955 |
| 0.0693 | 10.71 | 59200 | 0.1622 | 0.9917 |
| 0.0693 | 10.73 | 59300 | 0.1580 | 1.0018 |
| 0.0693 | 10.75 | 59400 | 0.1601 | 1.0077 |
| 0.0699 | 10.76 | 59500 | 0.1605 | 0.9997 |
| 0.0699 | 10.78 | 59600 | 0.1585 | 1.0009 |
| 0.0699 | 10.8 | 59700 | 0.1541 | 1.0058 |
| 0.0699 | 10.82 | 59800 | 0.1583 | 1.0026 |
| 0.0699 | 10.84 | 59900 | 0.1592 | 0.9992 |
| 0.0671 | 10.85 | 60000 | 0.1590 | 1.0004 |
| 0.0671 | 10.87 | 60100 | 0.1585 | 1.0060 |
| 0.0671 | 10.89 | 60200 | 0.1579 | 1.0063 |
| 0.0671 | 10.91 | 60300 | 0.1582 | 0.9949 |
| 0.0671 | 10.93 | 60400 | 0.1562 | 1.0004 |
| 0.0661 | 10.94 | 60500 | 0.1560 | 0.9950 |
| 0.0661 | 10.96 | 60600 | 0.1564 | 0.9990 |
| 0.0661 | 10.98 | 60700 | 0.1552 | 0.9982 |
| 0.0661 | 11.0 | 60800 | 0.1596 | 1.0018 |
| 0.0661 | 11.02 | 60900 | 0.1618 | 0.9905 |
| 0.0634 | 11.03 | 61000 | 0.1652 | 0.9890 |
| 0.0634 | 11.05 | 61100 | 0.1649 | 0.9886 |
| 0.0634 | 11.07 | 61200 | 0.1668 | 0.9870 |
| 0.0634 | 11.09 | 61300 | 0.1663 | 0.9921 |
| 0.0634 | 11.11 | 61400 | 0.1650 | 0.9919 |
| 0.0587 | 11.13 | 61500 | 0.1674 | 0.9831 |
| 0.0587 | 11.14 | 61600 | 0.1633 | 0.9793 |
| 0.0587 | 11.16 | 61700 | 0.1665 | 0.9781 |
| 0.0587 | 11.18 | 61800 | 0.1642 | 0.9821 |
| 0.0587 | 11.2 | 61900 | 0.1638 | 0.9797 |
| 0.0581 | 11.22 | 62000 | 0.1628 | 0.9727 |
| 0.0581 | 11.23 | 62100 | 0.1661 | 0.9796 |
| 0.0581 | 11.25 | 62200 | 0.1641 | 0.9830 |
| 0.0581 | 11.27 | 62300 | 0.1601 | 0.9867 |
| 0.0581 | 11.29 | 62400 | 0.1626 | 0.9757 |
| 0.0584 | 11.31 | 62500 | 0.1632 | 1.0014 |
| 0.0584 | 11.32 | 62600 | 0.1626 | 1.0052 |
| 0.0584 | 11.34 | 62700 | 0.1586 | 1.0098 |
| 0.0584 | 11.36 | 62800 | 0.1597 | 1.0151 |
| 0.0584 | 11.38 | 62900 | 0.1624 | 1.0054 |
| 0.0589 | 11.4 | 63000 | 0.1618 | 1.0018 |
| 0.0589 | 11.41 | 63100 | 0.1635 | 1.0032 |
| 0.0589 | 11.43 | 63200 | 0.1654 | 1.0142 |
| 0.0589 | 11.45 | 63300 | 0.1646 | 1.0031 |
| 0.0589 | 11.47 | 63400 | 0.1618 | 1.0118 |
| 0.0579 | 11.49 | 63500 | 0.1634 | 1.0218 |
| 0.0579 | 11.51 | 63600 | 0.1616 | 1.0179 |
| 0.0579 | 11.52 | 63700 | 0.1603 | 1.0036 |
| 0.0579 | 11.54 | 63800 | 0.1610 | 1.0150 |
| 0.0579 | 11.56 | 63900 | 0.1605 | 1.0285 |
| 0.0572 | 11.58 | 64000 | 0.1621 | 1.0261 |
| 0.0572 | 11.6 | 64100 | 0.1625 | 1.0252 |
| 0.0572 | 11.61 | 64200 | 0.1677 | 1.0257 |
| 0.0572 | 11.63 | 64300 | 0.1656 | 1.0243 |
| 0.0572 | 11.65 | 64400 | 0.1669 | 1.0270 |
| 0.0592 | 11.67 | 64500 | 0.1605 | 1.0305 |
| 0.0592 | 11.69 | 64600 | 0.1633 | 1.0277 |
| 0.0592 | 11.7 | 64700 | 0.1606 | 1.0176 |
| 0.0592 | 11.72 | 64800 | 0.1618 | 1.0249 |
| 0.0592 | 11.74 | 64900 | 0.1609 | 1.0113 |
| 0.0595 | 11.76 | 65000 | 0.1609 | 1.0254 |
| 0.0595 | 11.78 | 65100 | 0.1662 | 1.0275 |
| 0.0595 | 11.79 | 65200 | 0.1652 | 1.0164 |
| 0.0595 | 11.81 | 65300 | 0.1638 | 1.0266 |
| 0.0595 | 11.83 | 65400 | 0.1589 | 1.0274 |
| 0.0588 | 11.85 | 65500 | 0.1607 | 1.0136 |
| 0.0588 | 11.87 | 65600 | 0.1592 | 1.0136 |
| 0.0588 | 11.88 | 65700 | 0.1581 | 1.0183 |
| 0.0588 | 11.9 | 65800 | 0.1587 | 1.0133 |
| 0.0588 | 11.92 | 65900 | 0.1596 | 1.0170 |
| 0.0558 | 11.94 | 66000 | 0.1590 | 1.0161 |
| 0.0558 | 11.96 | 66100 | 0.1597 | 1.0193 |
| 0.0558 | 11.98 | 66200 | 0.1590 | 1.0193 |
| 0.0558 | 11.99 | 66300 | 0.1608 | 1.0242 |
| 0.0558 | 12.01 | 66400 | 0.1642 | 1.0231 |
| 0.0555 | 12.03 | 66500 | 0.1679 | 1.0168 |
| 0.0555 | 12.05 | 66600 | 0.1674 | 1.0083 |
| 0.0555 | 12.07 | 66700 | 0.1658 | 1.0069 |
| 0.0555 | 12.08 | 66800 | 0.1661 | 1.0134 |
| 0.0555 | 12.1 | 66900 | 0.1682 | 1.0274 |
| 0.0508 | 12.12 | 67000 | 0.1702 | 1.0219 |
| 0.0508 | 12.14 | 67100 | 0.1694 | 1.0219 |
| 0.0508 | 12.16 | 67200 | 0.1667 | 1.0236 |
| 0.0508 | 12.17 | 67300 | 0.1672 | 1.0253 |
| 0.0508 | 12.19 | 67400 | 0.1640 | 1.0215 |
| 0.0513 | 12.21 | 67500 | 0.1649 | 1.0242 |
| 0.0513 | 12.23 | 67600 | 0.1687 | 1.0262 |
| 0.0513 | 12.25 | 67700 | 0.1655 | 1.0231 |
| 0.0513 | 12.26 | 67800 | 0.1692 | 1.0176 |
| 0.0513 | 12.28 | 67900 | 0.1675 | 1.0202 |
| 0.0519 | 12.3 | 68000 | 0.1644 | 1.0241 |
| 0.0519 | 12.32 | 68100 | 0.1651 | 1.0297 |
| 0.0519 | 12.34 | 68200 | 0.1661 | 1.0287 |
| 0.0519 | 12.36 | 68300 | 0.1665 | 1.0257 |
| 0.0519 | 12.37 | 68400 | 0.1685 | 1.0233 |
| 0.0522 | 12.39 | 68500 | 0.1636 | 1.0177 |
| 0.0522 | 12.41 | 68600 | 0.1709 | 1.0200 |
| 0.0522 | 12.43 | 68700 | 0.1684 | 1.0164 |
| 0.0522 | 12.45 | 68800 | 0.1666 | 1.0119 |
| 0.0522 | 12.46 | 68900 | 0.1683 | 1.0136 |
| 0.05 | 12.48 | 69000 | 0.1696 | 1.0127 |
| 0.05 | 12.5 | 69100 | 0.1708 | 1.0184 |
| 0.05 | 12.52 | 69200 | 0.1654 | 1.0282 |
| 0.05 | 12.54 | 69300 | 0.1700 | 1.0235 |
| 0.05 | 12.55 | 69400 | 0.1688 | 1.0257 |
| 0.0513 | 12.57 | 69500 | 0.1646 | 1.0274 |
| 0.0513 | 12.59 | 69600 | 0.1660 | 1.0247 |
| 0.0513 | 12.61 | 69700 | 0.1657 | 1.0188 |
| 0.0513 | 12.63 | 69800 | 0.1654 | 1.0087 |
| 0.0513 | 12.64 | 69900 | 0.1681 | 1.0146 |
| 0.0512 | 12.66 | 70000 | 0.1660 | 1.0185 |
| 0.0512 | 12.68 | 70100 | 0.1690 | 1.0214 |
| 0.0512 | 12.7 | 70200 | 0.1683 | 1.0160 |
| 0.0512 | 12.72 | 70300 | 0.1695 | 1.0198 |
| 0.0512 | 12.74 | 70400 | 0.1666 | 1.0193 |
| 0.0484 | 12.75 | 70500 | 0.1654 | 1.0142 |
| 0.0484 | 12.77 | 70600 | 0.1598 | 1.0154 |
| 0.0484 | 12.79 | 70700 | 0.1623 | 1.0139 |
| 0.0484 | 12.81 | 70800 | 0.1662 | 1.0180 |
| 0.0484 | 12.83 | 70900 | 0.1659 | 1.0232 |
| 0.0501 | 12.84 | 71000 | 0.1662 | 1.0202 |
| 0.0501 | 12.86 | 71100 | 0.1639 | 1.0161 |
| 0.0501 | 12.88 | 71200 | 0.1666 | 1.0151 |
| 0.0501 | 12.9 | 71300 | 0.1644 | 1.0129 |
| 0.0501 | 12.92 | 71400 | 0.1642 | 1.0171 |
| 0.0482 | 12.93 | 71500 | 0.1635 | 1.0162 |
| 0.0482 | 12.95 | 71600 | 0.1637 | 1.0186 |
| 0.0482 | 12.97 | 71700 | 0.1639 | 1.0142 |
| 0.0482 | 12.99 | 71800 | 0.1643 | 1.0122 |
| 0.0482 | 13.01 | 71900 | 0.1679 | 1.0156 |
| 0.0483 | 13.02 | 72000 | 0.1717 | 1.0224 |
| 0.0483 | 13.04 | 72100 | 0.1742 | 1.0229 |
| 0.0483 | 13.06 | 72200 | 0.1718 | 1.0237 |
| 0.0483 | 13.08 | 72300 | 0.1742 | 1.0266 |
| 0.0483 | 13.1 | 72400 | 0.1736 | 1.0257 |
| 0.0443 | 13.12 | 72500 | 0.1741 | 1.0275 |
| 0.0443 | 13.13 | 72600 | 0.1745 | 1.0325 |
| 0.0443 | 13.15 | 72700 | 0.1737 | 1.0296 |
| 0.0443 | 13.17 | 72800 | 0.1722 | 1.0303 |
| 0.0443 | 13.19 | 72900 | 0.1702 | 1.0305 |
| 0.0424 | 13.21 | 73000 | 0.1733 | 1.0241 |
| 0.0424 | 13.22 | 73100 | 0.1748 | 1.0243 |
| 0.0424 | 13.24 | 73200 | 0.1760 | 1.0231 |
| 0.0424 | 13.26 | 73300 | 0.1745 | 1.0241 |
| 0.0424 | 13.28 | 73400 | 0.1772 | 1.0217 |
| 0.0424 | 13.3 | 73500 | 0.1755 | 1.0206 |
| 0.0424 | 13.31 | 73600 | 0.1743 | 1.0242 |
| 0.0424 | 13.33 | 73700 | 0.1738 | 1.0208 |
| 0.0424 | 13.35 | 73800 | 0.1736 | 1.0249 |
| 0.0424 | 13.37 | 73900 | 0.1747 | 1.0271 |
| 0.0437 | 13.39 | 74000 | 0.1707 | 1.0241 |
| 0.0437 | 13.4 | 74100 | 0.1731 | 1.0269 |
| 0.0437 | 13.42 | 74200 | 0.1743 | 1.0290 |
| 0.0437 | 13.44 | 74300 | 0.1739 | 1.0266 |
| 0.0437 | 13.46 | 74400 | 0.1763 | 1.0246 |
| 0.0443 | 13.48 | 74500 | 0.1724 | 1.0209 |
| 0.0443 | 13.49 | 74600 | 0.1744 | 1.0244 |
| 0.0443 | 13.51 | 74700 | 0.1717 | 1.0232 |
| 0.0443 | 13.53 | 74800 | 0.1754 | 1.0217 |
| 0.0443 | 13.55 | 74900 | 0.1721 | 1.0234 |
| 0.0435 | 13.57 | 75000 | 0.1751 | 1.0197 |
| 0.0435 | 13.59 | 75100 | 0.1727 | 1.0285 |
| 0.0435 | 13.6 | 75200 | 0.1715 | 1.0221 |
| 0.0435 | 13.62 | 75300 | 0.1746 | 1.0247 |
| 0.0435 | 13.64 | 75400 | 0.1712 | 1.0231 |
| 0.0436 | 13.66 | 75500 | 0.1719 | 1.0228 |
| 0.0436 | 13.68 | 75600 | 0.1727 | 1.0197 |
| 0.0436 | 13.69 | 75700 | 0.1750 | 1.0252 |
| 0.0436 | 13.71 | 75800 | 0.1702 | 1.0241 |
| 0.0436 | 13.73 | 75900 | 0.1720 | 1.0250 |
| 0.0433 | 13.75 | 76000 | 0.1744 | 1.0210 |
| 0.0433 | 13.77 | 76100 | 0.1735 | 1.0211 |
| 0.0433 | 13.78 | 76200 | 0.1727 | 1.0205 |
| 0.0433 | 13.8 | 76300 | 0.1706 | 1.0218 |
| 0.0433 | 13.82 | 76400 | 0.1709 | 1.0238 |
| 0.0431 | 13.84 | 76500 | 0.1705 | 1.0197 |
| 0.0431 | 13.86 | 76600 | 0.1734 | 1.0223 |
| 0.0431 | 13.87 | 76700 | 0.1695 | 1.0250 |
| 0.0431 | 13.89 | 76800 | 0.1734 | 1.0232 |
| 0.0431 | 13.91 | 76900 | 0.1724 | 1.0219 |
| 0.041 | 13.93 | 77000 | 0.1706 | 1.0236 |
| 0.041 | 13.95 | 77100 | 0.1689 | 1.0220 |
| 0.041 | 13.97 | 77200 | 0.1738 | 1.0230 |
| 0.041 | 13.98 | 77300 | 0.1727 | 1.0254 |
| 0.041 | 14.0 | 77400 | 0.1721 | 1.0261 |
| 0.041 | 14.02 | 77500 | 0.1760 | 1.0261 |
| 0.041 | 14.04 | 77600 | 0.1772 | 1.0202 |
| 0.041 | 14.06 | 77700 | 0.1782 | 1.0202 |
| 0.041 | 14.07 | 77800 | 0.1777 | 1.0222 |
| 0.041 | 14.09 | 77900 | 0.1787 | 1.0203 |
| 0.0383 | 14.11 | 78000 | 0.1790 | 1.0236 |
| 0.0383 | 14.13 | 78100 | 0.1812 | 1.0245 |
| 0.0383 | 14.15 | 78200 | 0.1778 | 1.0224 |
| 0.0383 | 14.16 | 78300 | 0.1771 | 1.0231 |
| 0.0383 | 14.18 | 78400 | 0.1782 | 1.0242 |
| 0.0391 | 14.2 | 78500 | 0.1785 | 1.0262 |
| 0.0391 | 14.22 | 78600 | 0.1791 | 1.0261 |
| 0.0391 | 14.24 | 78700 | 0.1770 | 1.0254 |
| 0.0391 | 14.25 | 78800 | 0.1810 | 1.0257 |
| 0.0391 | 14.27 | 78900 | 0.1794 | 1.0241 |
| 0.0387 | 14.29 | 79000 | 0.1774 | 1.0256 |
| 0.0387 | 14.31 | 79100 | 0.1774 | 1.0236 |
| 0.0387 | 14.33 | 79200 | 0.1759 | 1.0222 |
| 0.0387 | 14.35 | 79300 | 0.1787 | 1.0237 |
| 0.0387 | 14.36 | 79400 | 0.1788 | 1.0227 |
| 0.0372 | 14.38 | 79500 | 0.1789 | 1.0232 |
| 0.0372 | 14.4 | 79600 | 0.1771 | 1.0254 |
| 0.0372 | 14.42 | 79700 | 0.1777 | 1.0244 |
| 0.0372 | 14.44 | 79800 | 0.1791 | 1.0225 |
| 0.0372 | 14.45 | 79900 | 0.1786 | 1.0237 |
| 0.0385 | 14.47 | 80000 | 0.1782 | 1.0243 |
| 0.0385 | 14.49 | 80100 | 0.1770 | 1.0236 |
| 0.0385 | 14.51 | 80200 | 0.1782 | 1.0240 |
| 0.0385 | 14.53 | 80300 | 0.1764 | 1.0243 |
| 0.0385 | 14.54 | 80400 | 0.1748 | 1.0248 |
| 0.039 | 14.56 | 80500 | 0.1758 | 1.0232 |
| 0.039 | 14.58 | 80600 | 0.1763 | 1.0246 |
| 0.039 | 14.6 | 80700 | 0.1770 | 1.0220 |
| 0.039 | 14.62 | 80800 | 0.1788 | 1.0225 |
| 0.039 | 14.63 | 80900 | 0.1781 | 1.0230 |
| 0.039 | 14.65 | 81000 | 0.1779 | 1.0230 |
| 0.039 | 14.67 | 81100 | 0.1755 | 1.0212 |
| 0.039 | 14.69 | 81200 | 0.1765 | 1.0226 |
| 0.039 | 14.71 | 81300 | 0.1787 | 1.0241 |
| 0.039 | 14.72 | 81400 | 0.1782 | 1.0250 |
| 0.0368 | 14.74 | 81500 | 0.1780 | 1.0248 |
| 0.0368 | 14.76 | 81600 | 0.1782 | 1.0242 |
| 0.0368 | 14.78 | 81700 | 0.1782 | 1.0242 |
| 0.0368 | 14.8 | 81800 | 0.1792 | 1.0241 |
| 0.0368 | 14.82 | 81900 | 0.1796 | 1.0238 |
| 0.0378 | 14.83 | 82000 | 0.1795 | 1.0236 |
| 0.0378 | 14.85 | 82100 | 0.1796 | 1.0239 |
| 0.0378 | 14.87 | 82200 | 0.1792 | 1.0236 |
| 0.0378 | 14.89 | 82300 | 0.1789 | 1.0239 |
| 0.0378 | 14.91 | 82400 | 0.1788 | 1.0238 |
| 0.0386 | 14.92 | 82500 | 0.1787 | 1.0239 |
| 0.0386 | 14.94 | 82600 | 0.1786 | 1.0236 |
| 0.0386 | 14.96 | 82700 | 0.1786 | 1.0237 |
| 0.0386 | 14.98 | 82800 | 0.1787 | 1.0239 |
| 0.0386 | 15.0 | 82900 | 0.1788 | 1.0238 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
gokulkarthik/xlm-roberta-qa-chaii | 02f9edd5440b984f92764c4fadadab75079be001 | 2021-12-06T15:50:08.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"en",
"ta",
"hi",
"dataset:squad",
"dataset:chaii",
"transformers",
"autotrain_compatible"
] | question-answering | false | gokulkarthik | null | gokulkarthik/xlm-roberta-qa-chaii | 21 | null | transformers | 8,170 | ---
language:
- en
- ta
- hi
datasets:
- squad
- chaii
widget:
- text: "அலுமினியத்தின் அணு எண் என்ன?"
context: "அலுமினியம் (ஆங்கிலம்: அலுமினியம்; வட அமெரிக்க ஆங்கிலம்: Aluminum) ஒரு வேதியியல் தனிமம் ஆகும். இதனுடைய அணு எண் 13 ஆகும். இது பூமியில் அதிகம் கிடைக்கும் உலோகங்களுள் ஒன்று. இது மின்சாரத்தையும் வெப்பத்தையும் கடத்த வல்லது. பாக்ஸைட் என்ற தாதுவில் இருந்து அலுமினியம் தயாரிக்கப்படுகிறது. இதன் வேதிக்குறியீடு Al ஆகும்."
- text: "ज्वाला गुट्टा की माँ का नाम क्या है?"
context: "ज्वाला गुट्टा (जन्म: 7 सितंबर 1983; वर्धा, महाराष्ट्र) एक भारतीय बैडमिंटन खिलाडी हैं। प्रारंभिक जीवन ज्वाला गुट्टा का जन्म 7 सितंबर 1983 को वर्धा, महाराष्ट्र में हुआ था। उनके पिता एम. क्रांति तेलुगु और मां येलन चीन से हैं। उनकी मां येलन गुट्टा पहली बार 1977 में अपने दादा जी के साथ भारत आई थीं। ज्वाला गुट्टा की प्रारंभिक पढ़ाई हैदराबाद से हुई और यहीं से उन्होंने बैडमिंटन खेलना भी शुरू किया। कॅरियर 10 साल की उम्र से ही ज्वाला गुट्टा ने एस.एम. आरिफ से ट्रेनिंग लेना शुरू कर दिया था। एस.एम. आरिफ भारत के जाने माने खेल प्रशिक्षक हैं जिन्हें द्रोणाचार्य अवार्ड से सम्मानित किया गया है। पहली बार 13 साल की उम्र में उन्होंने मिनी नेशनल बैडमिंटन चैंपियनशिप जीती थी। साल 2000 में ज्वाला गुट्टा ने 17 साल की उम्र में जूनियर नेशनल बैडमिंटन चैंपियनशिप जीती। इसी साल उन्होंने श्रुति कुरियन के साथ डबल्स में जोड़ी बनाते हुए महिलाओं के डबल्स जूनियर नेशनल बैडमिंटन चैंपियनशिप और सीनियर नेशनल बैडमिंटन चैंपियनशिप में जीत हासिल की। श्रुति कुरियन के साथ उनकी जोड़ी काफी लंबे समय तक चली। 2002 से 2008 तक लगातार सात बार ज्वाला गुट्टा ने महिलाओं के नेशनल युगल प्रतियोगिता में जीत हासिल की।"
- text: "How many bones do you have in your body?"
context: "A normal adult human skeleton consists of the following 206 (208 if the breast is thought to be three parts). This number can vary depending on the physiological differences. For example, in a very small number of humans, an extra rib (neck) or an extra lower spinal cord is found. There are 22 bones in the human skull (excluding the ear tendons), which are divided into eight cranium bones and 14 facial bones. (Thick numbers indicate the numbers seen in the nearby picture.) Bones (8) 1 frontal bone (2) 3 temporal bone (2) 4 occipital bone (4) Sphinoid bone (14) 7 mandible (6) maxilla (2) palatine bone (2) 5 zygotic bone (9) 9 nasal bone (2) The sacral vertebrae (4 or 5), in adults, form the sacral vertebrae (3 to 5), in adults they form the valve."
---
# XLM-RoBERTa for question answering in Indian languages
Pre-trained XLM-RoBERTa with intermediate pre-training on the SQuAD dataset (English) and fine-tuning on the chaii dataset (Tamil, Hindi).
# How to use from the 🤗/transformers library
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("gokulkarthik/xlm-roberta-qa-chaii")
model = AutoModelForQuestionAnswering.from_pretrained("gokulkarthik/xlm-roberta-qa-chaii")
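# Minimal inference sketch (an assumed usage pattern, not from the original card):
# the question-answering pipeline ties the tokenizer and model together.
from transformers import pipeline
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(question="How many bones do you have in your body?",
            context="A normal adult human skeleton consists of 206 bones.")
print(result["answer"], result["score"])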
``` |
google/t5-efficient-base-nl40 | f3d787d3e0e8156d17f6f2b437fb14631c8abbd8 | 2022-02-15T10:53:33.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-efficient-base-nl40 | 21 | null | transformers | 8,171 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-BASE-NL40 (Deep-Narrow version)
T5-Efficient-BASE-NL40 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-base-nl40** - is of model type **Base** with the following variations:
- **nl** is **40**
It has **685.53** million parameters and thus requires *ca.* **2742.11 MB** of memory in full precision (*fp32*)
or **1371.05 MB** of memory in half precision (*fp16* or *bf16*).
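As a quick sanity check (a sketch added for illustration, not part of the original release notes), these figures can be reproduced by loading the checkpoint and counting parameters:
```python
# Sketch: reproduce the parameter count and memory figures quoted above.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nl40")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")      # ~685.53M
print(f"fp32: {n_params * 4 / 1e6:.2f} MB")     # ~2742 MB (4 bytes per parameter)
print(f"fp16: {n_params * 2 / 1e6:.2f} MB")     # ~1371 MB (2 bytes per parameter)
```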
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
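For illustration (an example of the objective added here, not taken from the original card), span corruption masks random spans with sentinel tokens and trains the model to reconstruct them:
```
Input:  "Thank you <extra_id_0> me to your party <extra_id_1> week."
Target: "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"
```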
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future. |
huggingtweets/4by3animetits | 09ba7bd133af922a75414d546b5498ad10218abe | 2021-09-14T06:15:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/4by3animetits | 21 | null | transformers | 8,172 | ---
language: en
thumbnail: https://www.huggingtweets.com/4by3animetits/1631600106043/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1437436917201637376/YMXf838Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Numb</div>
<div style="text-align: center; font-size: 14px;">@4by3animetits</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Numb.
| Data | Numb |
| --- | --- |
| Tweets downloaded | 3206 |
| Retweets | 1497 |
| Short tweets | 491 |
| Tweets kept | 1218 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pdw5mgr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @4by3animetits's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5yrdnbzr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5yrdnbzr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/4by3animetits')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/molleindustria | 52a1a47c3167a2e2b5d4af6428c7e128fb7312e7 | 2021-05-22T15:04:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/molleindustria | 21 | null | transformers | 8,173 | ---
language: en
thumbnail: https://www.huggingtweets.com/molleindustria/1607297976960/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1093212724/logo_small_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Paolo Pedercini 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@molleindustria bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@molleindustria's tweets](https://twitter.com/molleindustria).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3240</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>376</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>172</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2692</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/r51uy9bs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @molleindustria's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cdzfc0q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cdzfc0q/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/molleindustria'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/porngum_ebooks | e2db877750ef12891172d68191b803a3050083aa | 2021-05-22T19:07:00.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/porngum_ebooks | 21 | null | transformers | 8,174 | ---
language: en
thumbnail: https://www.huggingtweets.com/porngum_ebooks/1621363486627/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1383374684071227395/e9hDXrVN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Envelope</div>
<div style="text-align: center; font-size: 14px;">@porngum_ebooks</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Envelope.
| Data | Envelope |
| --- | --- |
| Tweets downloaded | 3173 |
| Retweets | 817 |
| Short tweets | 725 |
| Tweets kept | 1631 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2cyxpt28/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @porngum_ebooks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vi26h00l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vi26h00l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/porngum_ebooks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hyunwoongko/megatron-11B | 587257c53d3f43ca2ec213451f4d4c17a8c3e2ed | 2021-06-22T18:21:05.000Z | [
"pytorch",
"megatron",
"text-generation",
"transformers"
] | text-generation | false | hyunwoongko | null | hyunwoongko/megatron-11B | 21 | 2 | transformers | 8,175 | Entry not found |
idjotherwise/autonlp-reading_prediction-172506 | 6dd4934e8fe44bad70006d590cfb855b7984a23e | 2021-05-20T16:57:07.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:idjotherwise/autonlp-data-reading_prediction",
"transformers",
"autonlp"
] | text-classification | false | idjotherwise | null | idjotherwise/autonlp-reading_prediction-172506 | 21 | null | transformers | 8,176 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- idjotherwise/autonlp-data-reading_prediction
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 172506
## Validation Metrics
- Loss: 0.03257797285914421
- MSE: 0.03257797285914421
- MAE: 0.14246532320976257
- R2: 0.9693824457290849
- RMSE: 0.18049369752407074
- Explained Variance: 0.9699198007583618
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idjotherwise/autonlp-reading_prediction-172506
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
tokenizer = AutoTokenizer.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
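# Sketch (assumption): for this single-column regression model the predicted
# reading score is the single logit returned by the classification head.
predicted_score = outputs.logits.squeeze().item()
print(predicted_score)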
``` |
infinitejoy/wav2vec2-large-xls-r-300m-hindi | 67d68e320645ef250a94d97eea9c620ecc9cdf9e | 2022-03-23T18:34:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-hindi | 21 | null | transformers | 8,177 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hi
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 100
- name: Test CER
type: cer
value: 92.98
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5414
- Wer: 1.0194
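A minimal transcription sketch (added for illustration; given the reported WER, transcription quality will be poor):
```python
# Sketch: transcribe a 16 kHz Hindi audio clip with this checkpoint.
# The audio file name is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-hindi",
)
print(asr("sample_hindi_16khz.wav")["text"])
```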
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
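These map roughly to the following `TrainingArguments` (a sketch for reproduction purposes, not the exact training script):
```python
# Sketch: approximate TrainingArguments matching the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hindi",
    learning_rate=7.5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=100.0,
    fp16=True,  # Native AMP
)
```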
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.6095 | 3.38 | 500 | 4.5881 | 0.9999 |
| 3.3396 | 6.76 | 1000 | 3.3301 | 1.0001 |
| 2.0061 | 10.14 | 1500 | 1.2096 | 1.0063 |
| 1.523 | 13.51 | 2000 | 0.7836 | 1.0051 |
| 1.3868 | 16.89 | 2500 | 0.6837 | 1.0080 |
| 1.2807 | 20.27 | 3000 | 0.6568 | 1.0112 |
| 1.231 | 23.65 | 3500 | 0.6120 | 1.0105 |
| 1.1673 | 27.03 | 4000 | 0.5972 | 1.0089 |
| 1.1416 | 30.41 | 4500 | 0.5780 | 1.0132 |
| 1.0738 | 33.78 | 5000 | 0.5806 | 1.0123 |
| 1.0771 | 37.16 | 5500 | 0.5586 | 1.0067 |
| 1.0287 | 40.54 | 6000 | 0.5464 | 1.0058 |
| 1.0106 | 43.92 | 6500 | 0.5407 | 1.0062 |
| 0.9538 | 47.3 | 7000 | 0.5334 | 1.0089 |
| 0.9607 | 50.68 | 7500 | 0.5395 | 1.0110 |
| 0.9108 | 54.05 | 8000 | 0.5502 | 1.0137 |
| 0.9252 | 57.43 | 8500 | 0.5498 | 1.0062 |
| 0.8943 | 60.81 | 9000 | 0.5448 | 1.0158 |
| 0.8728 | 64.19 | 9500 | 0.5257 | 1.0113 |
| 0.8577 | 67.57 | 10000 | 0.5550 | 1.0178 |
| 0.8332 | 70.95 | 10500 | 0.5607 | 1.0166 |
| 0.8174 | 74.32 | 11000 | 0.5429 | 1.0145 |
| 0.8168 | 77.7 | 11500 | 0.5561 | 1.0116 |
| 0.7872 | 81.08 | 12000 | 0.5478 | 1.0164 |
| 0.7707 | 84.46 | 12500 | 0.5412 | 1.0216 |
| 0.7742 | 87.84 | 13000 | 0.5391 | 1.0207 |
| 0.7594 | 91.22 | 13500 | 0.5379 | 1.0208 |
| 0.7678 | 94.59 | 14000 | 0.5415 | 1.0198 |
| 0.7502 | 97.97 | 14500 | 0.5409 | 1.0191 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
ismaelardo/BETO_4d | d2114ea296185c262ca9c5c3f305316eb910271a | 2021-12-30T23:53:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ismaelardo | null | ismaelardo/BETO_4d | 21 | null | transformers | 8,178 | Entry not found |
it5/it5-small-headline-generation | f985f0d04fe60572ac4df4aeca2d32133565489e | 2022-03-09T08:00:22.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"newspaper",
"ilgiornale",
"repubblica",
"headline-generation",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/it5-small-headline-generation | 21 | null | transformers | 8,179 | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- ilgiornale
- repubblica
- headline-generation
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
model-index:
- name: it5-small-headline-generation
results:
- task:
type: headline-generation
name: "Headline generation"
dataset:
type: headgen_it
name: "HeadGen-IT"
metrics:
- type: rouge1
value: 0.287
name: "Test Rouge1"
- type: rouge2
value: 0.100
name: "Test Rouge2"
- type: rougeL
value: 0.253
name: "Test RougeL"
- type: bertscore
value: 0.414
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "8g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Small for News Headline Generation 📣 🇮🇹
This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
hg = pipeline("text2text-generation", model='it5/it5-small-headline-generation')
hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-headline-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-headline-generation")
```
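As a minimal sketch (not part of the original card), the autoclasses loaded above can also be used for generation directly; the input article and the generation settings below are purely illustrative:
```python
article = "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati voteranno la sfiducia al governo guidato da Mariano Rajoy."  # illustrative input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```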
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jkgrad/xlnet-base-cased-squad-quoref | 1ab9e6595274eda0ab1960db6b0ac95b7fb3cb25 | 2021-01-28T06:54:08.000Z | [
"pytorch",
"xlnet",
"question-answering",
"arxiv:1906.08237",
"transformers",
"autotrain_compatible"
] | question-answering | false | jkgrad | null | jkgrad/xlnet-base-cased-squad-quoref | 21 | null | transformers | 8,180 | # XLNet Fine-tuned on SQuAD / Quoref Dataset
[XLNet](https://arxiv.org/abs/1906.08237), jointly developed by Google and CMU, fine-tuned on [SQuAD / SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) and [Quoref](https://leaderboard.allenai.org/quoref) for the question-answering downstream task.
## Evaluation Result on Quoref
```
{
"exact_match": 73.65591397848462,
"f1": 77.9981532789881
}
```
## Results Comparison on Quoref
| Metric | XLNet Baseline | Model FT on SQuAD |
| ------ | --------- | --------- |
| **EM** | **61.88** | **73.66** (+11.78) |
| **F1** | **70.51** | **78.00** (+7.49)|
## How to Use
```
from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast
model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref')
tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref')
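
# A minimal inference sketch (not part of the original card). The head loaded above is
# XLNet's beam-search QA head, so at inference time it returns top start/end indices
# rather than plain logits; the question and context strings are illustrative only.
import torch

question = "What capability does Quoref test?"
context = "Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_top_index[0, 0].item()  # best start position
end = outputs.end_top_index[0, 0].item()      # best end position for that start
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))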
``` |
junnyu/roformer_chinese_char_small | 9bfe6ff7c9e88946e660b3444d25674047409eb3 | 2022-01-04T11:45:10.000Z | [
"pytorch",
"tf",
"jax",
"roformer",
"fill-mask",
"zh",
"arxiv:2104.09864",
"transformers",
"tf2.0",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/roformer_chinese_char_small | 21 | null | transformers | 8,181 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch + TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_small")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_small")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[也||都||又||还||我]很好,我[就||想||去||也||又]去公园玩。
```
## TensorFlow 2.0 usage
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_small")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_small")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[也||都||又||还||我]很好,我[就||想||去||也||又]去公园玩。
```
## Citation
BibTeX:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
liam168/gen-gpt2-medium-chinese | efb34b2f0b82adfe57a9c3f11be066a2a6afc620 | 2021-07-07T02:26:55.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"zh",
"transformers"
] | text-generation | false | liam168 | null | liam168/gen-gpt2-medium-chinese | 21 | null | transformers | 8,182 | ---
language: zh
widget:
- text: "晓日千红"
- text: "长街躞蹀"
---
# gen-gpt2-medium-chinese
# Overview
- **Language model**: GPT2-Medium
- **Model size**: 68M
- **Language**: Chinese
# Example
```python
from transformers import TFGPT2LMHeadModel,AutoTokenizer
from transformers import TextGenerationPipeline
mode_name = 'liam168/gen-gpt2-medium-chinese'
tokenizer = AutoTokenizer.from_pretrained(mode_name)
model = TFGPT2LMHeadModel.from_pretrained(mode_name)
text_generator = TextGenerationPipeline(model, tokenizer)
print(text_generator("晓日千红", max_length=64, do_sample=True))
print(text_generator("加餐小语", max_length=50, do_sample=False))
```
Output:
```text
[{'generated_text': '晓日千红 独 远 客 。 孤 夜 云 云 梦 到 冷 。 著 剩 笑 、 人 远 。 灯 啼 鸦 最 回 吟 。 望 , 枕 付 孤 灯 、 客 。 对 梅 残 照 偏 相 思 , 玉 弦 语 。 翠 台 新 妆 、 沉 、 登 临 水 。 空'}]
[{'generated_text': '加餐小语 有 有 骨 , 有 人 诗 成 自 远 诗 。 死 了 自 喜 乐 , 独 撑 天 下 诗 事 小 诗 柴 。 桃 花 谁 知 何 处 何 处 高 吟 诗 从 今 死 火 , 此 事'}]
```
|
liam168/qa-roberta-base-chinese-extractive | 4d2f870d15305bbf09588dc42f2dd845157e51e2 | 2021-07-16T05:01:19.000Z | [
"pytorch",
"bert",
"question-answering",
"zh",
"transformers",
"autotrain_compatible"
] | question-answering | false | liam168 | null | liam168/qa-roberta-base-chinese-extractive | 21 | 2 | transformers | 8,183 | ---
language: zh
widget:
- text: "著名诗歌《假如生活欺骗了你》的作者是"
context: "普希金从那里学习人民的语言,吸取了许多有益的养料,这一切对普希金后来的创作产生了很大的影响。这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》、《致凯恩》和《假如生活欺骗了你》等几十首抒情诗,叙事诗《努林伯爵》,历史剧《鲍里斯·戈都诺夫》,以及《叶甫盖尼·奥涅金》前六章。"
---
# Chinese RoBERTa-Base Model for QA
## Model description
A QA model fine-tuned on Chinese corpora.
## Overview
- **Language model**: RoBERTa-Base
- **Model size**: 400M
- **Language**: Chinese
## How to use
You can use the model directly with a pipeline for extractive question answering:
```python
>>> from transformers import AutoModelForQuestionAnswering,AutoTokenizer,pipeline
>>> context = '卡利亚·基拔(,)生于英国汉默史密斯,是一名英格兰籍职业足球员,于2010年夏季约满离开母会阿仙奴。直到2005/06年,基拔通常在阿仙奴的青年后备队效力。他在首次在2005年11月29日的联赛杯赛事上场,并于12月7日,在一个欧洲联赛冠军杯比赛对阿积士,作为替代左后卫,入替受伤的劳伦。2006年7月21日阿仙奴宣布,将基拔出借卡迪夫城整个2006-07赛季,其后转借给修安联。2008年1月3日返回阿仙奴授予46号码。2008年2月11日,阿仙奴的英超联赛比赛中对布莱克本作为后备球员。但2008年7月10日,基拔被出借莱斯特城的一个赛季之久。2009年3月3日主场对-{zh-hans:斯托克港;zh-hk:史托港}-,开赛后仅两分钟,基拔的传中球「挞Q」却直入网角,是他个人首个入球。基拔在外借期间成为常规正选,整季上阵达39场及射入1球,协助莱斯特城赢取英甲联赛冠军及重返英冠。2009/10年上半季仅于两场英格兰联赛杯及一场无关痛痒的欧联分组赛上阵,将于季后约满的基拔获外借到英冠榜末球会彼德堡直到球季结束,期间上阵10场。2010年夏季基拔约满阿仙奴成为自由球员,仅为母会合共上阵10场,英超「升班马」黑池有意罗致,其后前往-{zh-hans:谢菲尔德联; zh-hk:锡菲联;}-参加试训,惟未有获得录用。'
>>> mode_name = 'liam168/qa-roberta-base-chinese-extractive'
>>> model = AutoModelForQuestionAnswering.from_pretrained(mode_name)
>>> tokenizer = AutoTokenizer.from_pretrained(mode_name)
>>> QA = pipeline('question-answering', model=model, tokenizer=tokenizer)
>>> QA_input = {'question': "卡利亚·基拔的职业是什么?",'context': context}
>>> QA(QA_input)
{'score': 0.9999, 'start': 20, 'end': 31, 'answer': '一名英格兰籍职业足球员'}
```
## Contact
[email protected]
|
liangtaiwan/t5-v1_1-lm100k-base | ff02d26d22780e2a4e42b96965d2c7f5fa90e9e5 | 2021-10-21T09:30:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | liangtaiwan | null | liangtaiwan/t5-v1_1-lm100k-base | 21 | null | transformers | 8,184 | Entry not found |
madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1 | 9457400a20c3a0bdc0711ed11f3339b30d7b31aa | 2021-05-19T22:32:43.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad",
"arxiv:2005.07683",
"transformers",
"bert-base",
"license:mit",
"autotrain_compatible"
] | question-answering | false | madlag | null | madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1 | 21 | null | transformers | 8,185 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## BERT-base uncased model fine-tuned on SQuAD v1
This model is block sparse: the **linear** layers contain **12.5%** of the original weights.
The model contains **32.1%** of the original weights **overall**.
The training uses a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method.
That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.65x** faster than a dense network on the evaluation, at the price of some impact on the accuracy (see below).
This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).
This model is case-insensitive: it does not make a difference between english and English.
## Pruning details
A side-effect of the block pruning is that some of the attention heads are completely removed: 97 heads were removed out of a total of 144 (67.4%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.

## Density plot
<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1/raw/main/model_card/density.js" id="34ede51e-2375-4d96-99dd-383de82a2d16"></script>
## Details
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `342M` (original BERT: `438M`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))|
| ------ | --------- | --------- |
| **EM** | **74.39** | **80.8** |
| **F1** | **83.26** | **88.5** |
## Example Usage
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1",
tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1"
)
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print(predictions)
``` |
monologg/koelectra-small-finetuned-goemotions | 761a00c48d933899f3d70a71ba131cbcaca5145e | 2020-05-18T21:39:13.000Z | [
"pytorch",
"electra",
"transformers"
] | null | false | monologg | null | monologg/koelectra-small-finetuned-goemotions | 21 | null | transformers | 8,186 | Entry not found |
mrm8488/CodeGPT-small-finetuned-python-token-completion | 06b027cb8ff99bc236e608c7e3a73f855c99ccf6 | 2021-05-23T10:08:40.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/CodeGPT-small-finetuned-python-token-completion | 21 | 1 | transformers | 8,187 |
---
language: en
widget:
- text: "<s> def add_number ( a , b ) : <EOL> return a +"
---
# CodeGPT-small-py fine-tuned on CodeXGLUE for the Python token-level code completion task
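
A minimal usage sketch (not part of the original card): since this is a GPT-2-style causal language model, it can be driven through the `text-generation` pipeline. The prompt format with `<s>` and `<EOL>` markers mirrors the widget example above, and the generation settings are illustrative.

```python
from transformers import pipeline

completer = pipeline(
    "text-generation",
    model="mrm8488/CodeGPT-small-finetuned-python-token-completion",
)

prompt = "<s> def add_number ( a , b ) : <EOL> return a +"
print(completer(prompt, max_length=32)[0]["generated_text"])
``` |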
persiannlp/mt5-large-parsinlu-snli-entailment | 29df81b8dc19909cb5060518d726b0da287caedf | 2021-09-23T16:20:24.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:snli",
"transformers",
"entailment",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-large-parsinlu-snli-entailment | 21 | null | transformers | 8,188 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- entailment
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- snli
metrics:
- accuracy
---
# Textual Entailment (مدل برای پاسخ به استلزام منطقی)
This is a model for textual entailment problems.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size="large"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(premise, hypothesis, **generator_args):
input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.",
"در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد."
)
run_model(
"آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟",
"هیچ کودکی هرگز نمی خواهد سرگرم شود.",
)
run_model(
"ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم",
"علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم."
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
razent/SciFive-large-PMC | 742f5f056b465b331b6efabaf199cf68534296cc | 2022-03-20T17:45:54.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:pmc/open_access",
"arxiv:2106.03598",
"transformers",
"token-classification",
"text-classification",
"question-answering",
"text-generation",
"autotrain_compatible"
] | text-classification | false | razent | null | razent/SciFive-large-PMC | 21 | 1 | transformers | 8,189 | ---
language:
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pmc/open_access
---
# SciFive PMC Large
## Introduction
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-PMC")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-PMC")
model.to("cuda")  # the inputs below are moved to the GPU, so the model must be as well
sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=256,
early_stopping=True
)
for output in outputs:
line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(line)
``` |
readerbench/jurBERT-large | af9617d9c39dc5807704062b2f47d3b734d25d98 | 2021-11-19T11:55:47.000Z | [
"pytorch",
"tf",
"bert",
"ro",
"transformers"
] | null | false | readerbench | null | readerbench/jurBERT-large | 21 | null | transformers | 8,190 | Model card for jurBERT-large
---
language:
- ro
---
# jurBERT-large
## Pretrained juridical BERT model for Romanian
BERT Romanian juridical model trained using a masked language modeling (MLM) and next sentence prediction (NSP) objective.
It was introduced in this [paper](https://aclanthology.org/2021.nllp-1.8/). Two BERT models were released: **jurBERT-base** and **jurBERT-large**, both uncased.
| Model | Weights | L | H | A | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| jurBERT-base | 111M | 12 | 768 | 12 | 0.8936 | 0.9923 |
| *jurBERT-large* | *337M* | *24* | *1024* | *24* | *0.9005* | *0.9929* |
All models are available:
* [jurBERT-base](https://huggingface.co/readerbench/jurBERT-base)
* [jurBERT-large](https://huggingface.co/readerbench/jurBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-large")
model = TFAutoModel.from_pretrained("readerbench/jurBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-large")
model = AutoModel.from_pretrained("readerbench/jurBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
## Datasets
The model is trained on a private corpus (that can nevertheless be rented for a fee) comprised of all the final rulings, covering both civil and criminal cases, published by any Romanian civil court between 2010 and 2018. Validation is performed on the RoBanking dataset. We extracted from RoJur common types of cases pertinent to the banking domain (e.g. administration fee litigations, enforcement appeals), and kept only the summary of the arguments provided by both the plaintiffs and the defendants, together with the final verdict (in the form of a boolean value), to build RoBanking.
| Corpus | Scope |Entries | Size (GB)|
|-----------|:------------:|:---------:|:---------:|
| RoJur | pre-training | 11M | 160 |
| RoBanking | downstream | 108k | - |
## Downstream performance
We report Mean AUC and Std AUC on the task of predicting the outcome of a case.
### Results on RoBanking using only the plea of the plaintiff.
| Model | Mean AUC | Std AUC |
|--------------------|:--------:|:--------:|
| CNN | 79.60 | - |
| BI-LSTM | 80.99 | 0.26 |
| RoBERT-small | 70.54 | 0.28 |
| RoBERT-base | 79.74 | 0.21 |
| RoBERT-base + hf | 79.82 | 0.11 |
| RoBERT-large | 76.53 | 5.43 |
| jurBERT-base | **81.47**| **0.18** |
| jurBERT-base + hf | 81.40 | 0.18 |
| *jurBERT-large* | *78.38* | *1.77* |
### Results on RoBanking using pleas from both the plaintiff and defendant.
| Model | Mean AUC | Std AUC |
|---------------------|:--------:|:--------:|
| BI-LSTM | 84.60 | 0.59 |
| RoBERT-base | 84.40 | 0.26 |
| RoBERT-base + hf | 84.43 | 0.15 |
| jurBERT-base | 86.63 | 0.18 |
| jurBERT-base + hf | **86.73**| **0.22** |
| *jurBERT-large* | *82.04* | *0.64* |
For complete results and discussion please refer to the [paper](https://aclanthology.org/2021.nllp-1.8/).
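As a minimal sketch (not part of the original card) of how such an outcome-prediction model can be set up, the checkpoint can be loaded with a binary sequence-classification head and fine-tuned on plea/verdict pairs; RoBanking itself is private, so the plea text and label encoding below are placeholders.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-large")
model = AutoModelForSequenceClassification.from_pretrained("readerbench/jurBERT-large", num_labels=2)

plea = "exemplu de rezumat al argumentelor reclamantului"  # placeholder plea text
label = torch.tensor([1])                                   # placeholder verdict encoding

inputs = tokenizer(plea, truncation=True, max_length=512, return_tensors="pt")
loss = model(**inputs, labels=label).loss
loss.backward()  # plug this into a full training loop or the Trainer API
```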
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2021jurbert,
title={jurBERT: A Romanian BERT Model for Legal Judgement Prediction},
author={Masala, Mihai and Iacob, Radu Cristian Alexandru and Uban, Ana Sabina and Cidota, Marina and Velicu, Horia and Rebedea, Traian and Popescu, Marius},
booktitle={Proceedings of the Natural Legal Language Processing Workshop 2021},
pages={86--94},
year={2021}
}
```
|
remi/bertabs-finetuned-extractive-abstractive-summarization | af86c661fc7f94c8526300104d4f7442cdbd1a80 | 2021-05-20T04:15:22.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | remi | null | remi/bertabs-finetuned-extractive-abstractive-summarization | 21 | null | transformers | 8,191 | Entry not found |
saburbutt/xlnet_large_tweetqa | ed48a14ba0af1780f818c98b14de2100baba899a | 2021-04-13T22:34:59.000Z | [
"pytorch",
"xlnet",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saburbutt | null | saburbutt/xlnet_large_tweetqa | 21 | null | transformers | 8,192 | |
sap-ai-research/BERT-Large-Contrastive-Self-Supervised-ACL2020 | 68db970e7f9e4d00ec4fafc13df43607e1aed9cd | 2021-05-20T04:50:14.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sap-ai-research | null | sap-ai-research/BERT-Large-Contrastive-Self-Supervised-ACL2020 | 21 | null | transformers | 8,193 | Entry not found |
sc2qa/msmarco_qa_classifier | 2b4efdfe1e6b089de60c9340eb9175ac6dffae4c | 2022-03-30T18:33:34.000Z | [
"pytorch",
"roberta",
"text-classification",
"arxiv:2109.04689",
"transformers"
] | text-classification | false | sc2qa | null | sc2qa/msmarco_qa_classifier | 21 | null | transformers | 8,194 | For details, please refer to the following links.
Github repo: https://github.com/amazon-research/SC2QA-DRIL
Paper: [Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning](https://arxiv.org/pdf/2109.04689.pdf) |
shahukareem/wav2vec2-large-xlsr-53-dhivehi-v2 | 21f901058ba6daf20f130cfb4412c2d731f8433f | 2021-08-21T18:31:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dv",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"license:apache-2.0"
] | automatic-speech-recognition | false | shahukareem | null | shahukareem/wav2vec2-large-xlsr-53-dhivehi-v2 | 21 | 3 | transformers | 8,195 | ---
language: dv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
---
# Wav2Vec2-Large-XLSR-53-Dhivehi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dhivehi using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "dv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi-v2")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi-v2")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Dhivehi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "dv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi-v2")
model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi-v2")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\،\.\؟\!\'\"\–\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on the test set and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
``` |
skylord/wav2vec2-large-xlsr-hindi | c3bd6e40aadcdd3e7abf6a1ccfcef7b10447be75 | 2021-04-20T07:24:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"dataset:indic tts",
"dataset:iiith",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | skylord | null | skylord/wav2vec2-large-xlsr-hindi | 21 | 1 | transformers | 8,196 | ---
language: hi
datasets:
- common_voice
- indic tts
- iiith
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hindi XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
- name: Common Voice hi
type: common_voice
args: hi
- name: Indic IIT (IITM)
type: indic
args: hi
- name: IIITH Indic Dataset
type: iiith
args: hi
metrics:
- name: Custom Dataset Hindi WER
type: wer
value: 17.23
- name: CommonVoice Hindi (Test) WER
type: wer
value: 56.46
---
# Wav2Vec2-Large-XLSR-53-Hindi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the following datasets:
- [Common Voice](https://huggingface.co/datasets/common_voice),
- [Indic TTS- IITM](https://www.iitm.ac.in/donlab/tts/index.php) and
- [IIITH - Indic Speech Datasets](http://speech.iiit.ac.in/index.php/research-svl/69.html)
The Indic datasets are well balanced across gender and accents. However, the CommonVoice dataset is skewed towards male voices.
Fine-tuned on facebook/wav2vec2-large-xlsr-53 using the combined Hindi dataset for 60 epochs, reaching 17.05% WER.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hi", split="test")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Predictions
*Some good ones ..... *
| Predictions | Reference |
|-------|-------|
|फिर वो सूरज तारे पहाड बारिश पदछड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है | फिर वो सूरज तारे पहाड़ बारिश पतझड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है |
| इस कारण जंगल में बडी दूर स्थित राघव के आश्रम में लोघ कम आने लगे और अधिकांश भक्त सुंदर के आश्रम में जाने लगे | इस कारण जंगल में बड़ी दूर स्थित राघव के आश्रम में लोग कम आने लगे और अधिकांश भक्त सुन्दर के आश्रम में जाने लगे |
| अपने बचन के अनुसार शुभमूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा | अपने बचन के अनुसार शुभमुहूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा |
*Some crappy stuff .... *
| Predictions | Reference |
|-------|-------|
| वस गनिल साफ़ है। | उसका दिल साफ़ है। |
| चाय वा एक कुछ लैंगे हब | चायवाय कुछ लेंगे आप |
| टॉम आधे है स्कूल हें है | टॉम अभी भी स्कूल में है |
## Evaluation
The model can be evaluated on the following two datasets:
1. Custom dataset created from 20% of Indic, IIITH and CV (test): WER 17.23%
2. CommonVoice Hindi test dataset: WER 56.46%
Links to the datasets are provided above (check the links at the start of the README).
Train-test CSV files are shared at the following links:
a. IIITH [train](https://storage.googleapis.com/indic-dataset/train_test_splits/iiit_hi_train.csv) [test](https://storage.googleapis.com/indic-dataset/train_test_splits/iiit_hi_test.csv)
b. Indic TTS [train](https://storage.googleapis.com/indic-dataset/train_test_splits/indic_train_full.csv) [test](https://storage.googleapis.com/indic-dataset/train_test_splits/indic_test_full.csv)
Update the audio_path as per your local file structure.
```python
import torch
import torchaudio
import datasets
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
## Load the datasets
test_dataset = load_dataset("common_voice", "hi", split="test")
indic = load_dataset("csv", data_files= {'train':"/workspace/data/hi2/indic_train_full.csv",
"test": "/workspace/data/hi2/indic_test_full.csv"}, download_mode="force_redownload")
iiith = load_dataset("csv", data_files= {"train": "/workspace/data/hi2/iiit_hi_train.csv",
"test": "/workspace/data/hi2/iiit_hi_test.csv"}, download_mode="force_redownload")
## Pre-process datasets and concatenate to create test dataset
# Drop columns of common_voice
split = ['train', 'test', 'validation', 'other', 'invalidated']
for sp in split:
common_voice[sp] = common_voice[sp].remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])
common_voice = common_voice.rename_column('path', 'audio_path')
common_voice = common_voice.rename_column('sentence', 'target_text')
train_dataset = datasets.concatenate_datasets([indic['train'], iiith['train'], common_voice['train']])
test_dataset = datasets.concatenate_datasets([indic['test'], iiith['test'], common_voice['test'], common_voice['validation']])
## Load model from HF hub
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["target_text"] = re.sub(chars_to_ignore_regex, '', batch["target_text"])
batch["target_text"] = re.sub(unicode_ignore_regex, '', batch["target_text"])
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the test set and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result on custom dataset**: 17.23 %
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).sub(unicode_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the test set and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result on CommonVoice**: 56.46 %
## Training
The Common Voice `train` and `validation` splits were used for training, along with the Indic TTS and IIITH datasets listed above.
The script used for training & wandb dashboard can be found [here](https://wandb.ai/thinkevolve/huggingface/reports/Project-Hindi-XLSR-Large--Vmlldzo2MTI2MTQ)
|
sonoisa/t5-base-japanese-article-generation | 1355b9d6a603285ddba4ed9f1171e2eb69f944ab | 2022-02-21T13:37:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ja",
"transformers",
"seq2seq",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | sonoisa | null | sonoisa/t5-base-japanese-article-generation | 21 | null | transformers | 8,197 | ---
language: ja
tags:
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
---
# Model for generating article body text from a title
SEE: https://qiita.com/sonoisa/items/a9af64ff641f0bbfed44
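
A minimal usage sketch (not part of the original card): the exact input format used during fine-tuning is described in the linked Qiita article, so the bare title string and the generation settings below are only illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sonoisa/t5-base-japanese-article-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("sonoisa/t5-base-japanese-article-generation")

title = "夏に行きたい国内旅行先"  # placeholder title
inputs = tokenizer(title, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256, num_beams=4, repetition_penalty=1.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |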
spencerh/leftcenterpartisan | 69f9ba06e6d0c13a5c9b59e8fd0f85ef5693f988 | 2021-04-23T19:42:54.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | spencerh | null | spencerh/leftcenterpartisan | 21 | null | transformers | 8,198 | Entry not found |
ssmadha/gpt2-finetuned-scientific-articles | 7e10f99dbe964b0fd2d222165f50d14d036d8624 | 2021-12-14T20:47:55.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | ssmadha | null | ssmadha/gpt2-finetuned-scientific-articles | 21 | 2 | transformers | 8,199 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-scientific-articles
results: []
---
This repository is the submission for the final project for BF510 [Institutional Racism in Health and Science](http://irhs.bu.edu/) for Shariq Madha.
To see the Jupyter notebook detailing how this model was produced, as well as the motivation behind it, go [here](https://github.com/ssmadha/BF510-final-project/).
To try this out yourself, enter a prompt in the textbox to the right and hit compute (the first request may take a minute to process, but subsequent results should be quick).
# gpt2-finetuned-scientific-articles
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on scientific articles about algorithmic bias.
It achieves the following results on the evaluation set:
- Loss: 2.3793
## Model description
This model is a causal language modeling GPT-2 model fine-tuned on scientific articles about algorithmic bias, in an attempt to showcase an example of correcting for algorithmic bias.
## Intended uses & limitations
This model is intended for prompts about algorithms and bias. Other prompts will yield results, but they are less likely to be influenced by the fine-tuning.
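A minimal local-usage sketch (not part of the original card), mirroring what the hosted widget does; the prompt below is only an illustration:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ssmadha/gpt2-finetuned-scientific-articles")
print(generator("Algorithmic bias in clinical risk scores", max_length=60)[0]["generated_text"])
```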
## Training and evaluation data
This model was trained on freely accessible full-text articles obtained from a PubMed Central search on algorithmic bias. The pmc_result_algorithmicbias.txt file contains the list of PMC IDs used. Due to technical and time limitations, the model was fine-tuned only on the introduction sections, but training on other sections is planned.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5293 | 1.0 | 1071 | 2.3892 |
| 2.4821 | 2.0 | 2142 | 2.3793 |
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|