modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
leonadase/distilbert-base-uncased-finetuned-sem | 80803988cf0261705ea8c388d227c8983af50d88 | 2022-03-13T19:41:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:sem_eval2010_task8",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | leonadase | null | leonadase/distilbert-base-uncased-finetuned-sem | 3 | null | transformers | 22,000 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sem_eval2010_task8
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sem
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval2010_task8
type: sem_eval2010_task8
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8314317261685683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sem
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sem_eval2010_task8 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6704
- Accuracy: 0.8314
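A usage sketch (not from the card itself): it assumes the checkpoint loads with the standard `transformers` text-classification pipeline, and the exact input format for the entity pair depends on how the training data was preprocessed, which the card does not specify.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="leonadase/distilbert-base-uncased-finetuned-sem",
)

# SemEval-2010 Task 8 is relation classification, so the predicted label is a
# relation type; adapt the input (e.g. entity markers) to the preprocessing
# actually used during fine-tuning.
print(classifier("The company fabricates plastic chairs."))
```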
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
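The training script is not included in the card; the sketch below shows a `Trainer` setup that mirrors the hyperparameters above. The dataset id, column names, evaluation split, and label count are assumptions based on SemEval-2010 Task 8, not taken from the card.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed dataset id and column names; SemEval-2010 Task 8 defines 19 relation classes.
dataset = load_dataset("sem_eval_2010_task_8")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=19
)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

encoded = dataset.map(tokenize, batched=True).rename_column("relation", "labels")

# Mirrors the hyperparameters listed above; evaluation_strategy is inferred
# from the per-epoch validation metrics reported in the card.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-sem",
    learning_rate=2e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    tokenizer=tokenizer,
)
trainer.train()
```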
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9556 | 1.0 | 800 | 0.7859 | 0.7814 |
| 0.6136 | 2.0 | 1600 | 0.6069 | 0.8193 |
| 0.4314 | 3.0 | 2400 | 0.6179 | 0.8211 |
| 0.2315 | 4.0 | 3200 | 0.6617 | 0.8281 |
| 0.1655 | 5.0 | 4000 | 0.6704 | 0.8314 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
snoop2head/Deep-Shallow-En2Ko | 2054631bfdc0c478887c84d2638265c5b7b2c855 | 2022-03-21T00:09:29.000Z | [
"pytorch",
"transformer",
"transformers"
] | null | false | snoop2head | null | snoop2head/Deep-Shallow-En2Ko | 3 | null | transformers | 22,001 | Entry not found |
Sivakumar/distilbert-base-uncased-finetuned-squad | 5f59860ef11ad7ba7f6b205b5a979849a388b1d9 | 2022-03-13T21:52:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Sivakumar | null | Sivakumar/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 22,002 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4101
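A usage sketch (an assumption, not part of the original card): since the model was fine-tuned on SQuAD v2, questions may be unanswerable, which the pipeline's `handle_impossible_answer` flag accounts for.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Sivakumar/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased "
            "on the squad_v2 dataset.",
    handle_impossible_answer=True,  # SQuAD v2 contains unanswerable questions
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```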
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2109 | 1.0 | 8235 | 1.2303 |
| 0.9385 | 2.0 | 16470 | 1.2412 |
| 0.7448 | 3.0 | 24705 | 1.4101 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
robertou2/roberta-base-bne-finetuned-amazon_reviews_multi | a0e68c1807c1912eb4e055bf90d91ff2cd717345 | 2022-03-14T09:17:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | robertou2 | null | robertou2/roberta-base-bne-finetuned-amazon_reviews_multi | 3 | null | transformers | 22,003 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2368
- Accuracy: 0.9325
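A usage sketch (not from the card): the example review is illustrative, and the mapping of `amazon_reviews_multi` star ratings to labels is not documented here.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="robertou2/roberta-base-bne-finetuned-amazon_reviews_multi",
)

# Spanish product review; the model was fine-tuned on the Spanish split (args: es).
print(classifier("El producto llegó a tiempo y funciona perfectamente."))
```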
## Model description
Test model from session 4 of the "NLP de 0 a 100" course.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1919 | 1.0 | 1250 | 0.1690 | 0.933 |
| 0.0972 | 2.0 | 2500 | 0.2368 | 0.9325 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
aaraki/distilbert-base-uncased-finetuned-squad | 43e2a021b0de3604d9e86226c6c49db3464ae1b9 | 2022-03-15T00:52:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | aaraki | null | aaraki/distilbert-base-uncased-finetuned-squad | 3 | null | transformers | 22,004 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2636 | 1.0 | 5533 | 1.2248 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
lijingxin/distilbert-base-uncased-finetuned-clinc | 1f738f8aec9d4b6a1807c6a920d3a2343a8e5d85 | 2022-03-14T09:09:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | lijingxin | null | lijingxin/distilbert-base-uncased-finetuned-clinc | 3 | null | transformers | 22,005 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7755
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2992 | 1.0 | 318 | 3.2969 | 0.7339 |
| 2.6329 | 2.0 | 636 | 1.8817 | 0.8235 |
| 1.5442 | 3.0 | 954 | 1.1561 | 0.8939 |
| 1.0132 | 4.0 | 1272 | 0.8595 | 0.9103 |
| 0.7953 | 5.0 | 1590 | 0.7755 | 0.9161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
lijingxin/distilbert-base-uncased-distilled-clinc | 7bdc2344322402ce3dc98726280dba73a9f1953b | 2022-03-14T10:42:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | lijingxin | null | lijingxin/distilbert-base-uncased-distilled-clinc | 3 | null | transformers | 22,006 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9470967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2782
- Accuracy: 0.9471
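The "distilled" in the model name suggests the checkpoint was trained with knowledge distillation from a larger teacher intent classifier, but the card does not document the procedure. Purely as an illustration of that technique (not the author's actual training code), a typical distillation objective combines hard-label cross-entropy with a temperature-scaled KL divergence between student and teacher logits:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Illustrative knowledge-distillation objective.

    alpha weights the hard-label cross-entropy against the soft-label KL term;
    temperature softens both distributions before comparing them.
    """
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd

# Toy example: batch of 4 examples, 151 intent classes (clinc_oos "plus")
student_logits = torch.randn(4, 151)
teacher_logits = torch.randn(4, 151)
labels = torch.randint(0, 151, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```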
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3365 | 1.0 | 318 | 1.6602 | 0.7361 |
| 1.2799 | 2.0 | 636 | 0.8378 | 0.8548 |
| 0.6739 | 3.0 | 954 | 0.4872 | 0.9132 |
| 0.4143 | 4.0 | 1272 | 0.3640 | 0.9352 |
| 0.3051 | 5.0 | 1590 | 0.3168 | 0.9406 |
| 0.2585 | 6.0 | 1908 | 0.2970 | 0.9442 |
| 0.235 | 7.0 | 2226 | 0.2876 | 0.9458 |
| 0.2236 | 8.0 | 2544 | 0.2824 | 0.9458 |
| 0.2168 | 9.0 | 2862 | 0.2794 | 0.9468 |
| 0.2138 | 10.0 | 3180 | 0.2782 | 0.9471 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
GPL/bioasq-distilbert-tas-b-gpl-self_miner | 4f786c5032702a8b876d79592b6c35215d5b3ae8 | 2022-03-14T14:22:31.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/bioasq-distilbert-tas-b-gpl-self_miner | 3 | null | sentence-transformers | 22,007 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/quora-distilbert-tas-b-gpl-self_miner | c66a1c6d91dbfc61a4425e41fc9dd931e650fec0 | 2022-03-14T14:24:46.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/quora-distilbert-tas-b-gpl-self_miner | 3 | null | sentence-transformers | 22,008 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lewtun/distilhubert-finetuned-gtzan | 704a8d6ba38eac087c660fa0b9d0cfd385f5e777 | 2022-03-14T20:33:49.000Z | [
"pytorch",
"hubert",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | lewtun | null | lewtun/distilhubert-finetuned-gtzan | 3 | null | transformers | 22,009 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6310
- Accuracy: 0.84
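A usage sketch (an assumption; the card does not include inference code): the checkpoint should load with the standard audio-classification pipeline, and the predicted labels are GTZAN music genres.
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="lewtun/distilhubert-finetuned-gtzan",
)

# Path to a local audio clip (placeholder); GTZAN covers ten music genres.
print(classifier("path/to/audio_clip.wav", top_k=3))
```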
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.99 | 56 | 1.9996 | 0.4 |
| 2.0202 | 1.99 | 112 | 1.5102 | 0.51 |
| 2.0202 | 2.99 | 168 | 1.2698 | 0.67 |
| 1.289 | 3.99 | 224 | 1.0391 | 0.73 |
| 1.289 | 4.99 | 280 | 0.8988 | 0.75 |
| 0.8787 | 5.99 | 336 | 0.7758 | 0.82 |
| 0.8787 | 6.99 | 392 | 0.6896 | 0.83 |
| 0.6254 | 7.99 | 448 | 0.6936 | 0.81 |
| 0.6254 | 8.99 | 504 | 0.6433 | 0.84 |
| 0.4879 | 9.99 | 560 | 0.6310 | 0.84 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
kurianbenoy/kde_en_ml_translation_model | 766c8a06c07bb6352d0537ac1972d3c70360fd53 | 2022-05-11T16:57:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ml",
"fastai",
"translation",
"license:mit"
] | translation | false | kurianbenoy | null | kurianbenoy/kde_en_ml_translation_model | 3 | 2 | fastai | 22,010 | ---
language:
- en
- ml
license: mit
tags:
- fastai
- translation
---
# Fine-tuned En-ML translation
* source group: English
* target group: Malayalam
This is a machine translation model, created for fun, that translates English text to Malayalam; it was fine-tuned on the KDE dataset.
[Tweet](https://twitter.com/kurianbenoy2/status/1503082136009465857?s=20&t=7Hn-KUqHZRY6VJ16-i1qdA)
# Model card
## Model description
This model was fine-tuned on top of the MarianMT models created by the Helsinki-NLP group. The [training code is described here](https://kurianbenoy.com/ml-blog/fastai/huggingface/translation/fine%20tuning/malayalam/2022/03/12/_03_13_huggingace_translation_models.html).
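A sketch of how such a Marian-based checkpoint is typically used for generation with `transformers` (this assumes the repository contains standard `transformers` weights; the card itself only documents the fastai training notebook):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "kurianbenoy/kde_en_ml_translation_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative English input; the model was fine-tuned on KDE interface strings.
inputs = tokenizer("Click the button to save the file.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```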
## Intended uses & limitations
Intended just for fun and for the sake of learning.
Limitations: it occasionally returns very poor predictions.
|
internetoftim/upload_test | 684d36b157a2f837c6ce285d3b362792ceb1c3d2 | 2022-03-14T18:24:09.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | internetoftim | null | internetoftim/upload_test | 3 | null | transformers | 22,011 | Entry not found |
Kevincp560/pegasus-large-finetuned-Pubmed | 783ba784981c4d13d688079ca0034c4928e0c8b3 | 2022-03-14T20:57:20.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/pegasus-large-finetuned-Pubmed | 3 | null | transformers | 22,012 | ---
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: pegasus-large-finetuned-Pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 39.1107
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-large-finetuned-Pubmed
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7669
- Rouge1: 39.1107
- Rouge2: 15.4127
- Rougel: 24.3729
- Rougelsum: 35.1236
- Gen Len: 226.594
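A usage sketch for abstractive summarization with this checkpoint (not from the card; the article text is a placeholder and the generation settings are illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Kevincp560/pegasus-large-finetuned-Pubmed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # full text of a biomedical article (placeholder)
inputs = tokenizer(article, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```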
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.065 | 1.0 | 1000 | 1.8262 | 37.1986 | 14.3685 | 23.7153 | 33.0713 | 218.902 |
| 1.9552 | 2.0 | 2000 | 1.7933 | 38.0663 | 14.7813 | 23.8412 | 33.9574 | 217.488 |
| 1.8983 | 3.0 | 3000 | 1.7768 | 38.3975 | 15.0983 | 24.0247 | 34.314 | 222.32 |
| 1.882 | 4.0 | 4000 | 1.7687 | 39.1311 | 15.4167 | 24.2978 | 35.078 | 222.564 |
| 1.8456 | 5.0 | 5000 | 1.7669 | 39.1107 | 15.4127 | 24.3729 | 35.1236 | 226.594 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
gabitoo1234/autonlp-mut_uchile-640218740 | 34f7c4911d03031a1a6be285a9bac4bff2cd6654 | 2022-03-14T19:26:47.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:gabitoo1234/autonlp-data-mut_uchile",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | gabitoo1234 | null | gabitoo1234/autonlp-mut_uchile-640218740 | 3 | null | transformers | 22,013 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- gabitoo1234/autonlp-data-mut_uchile
co2_eq_emissions: 43.078469852595994
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 640218740
- CO2 Emissions (in grams): 43.078469852595994
## Validation Metrics
- Loss: 0.8302136063575745
- Accuracy: 0.7887341933835739
- Macro F1: 0.5756730305293746
- Micro F1: 0.7887341933835739
- Weighted F1: 0.7878942570915727
- Macro Precision: 0.620883634472996
- Micro Precision: 0.7887341933835739
- Weighted Precision: 0.8009430092038783
- Macro Recall: 0.5521761315904072
- Micro Recall: 0.7887341933835739
- Weighted Recall: 0.7887341933835739
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/gabitoo1234/autonlp-mut_uchile-640218740
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gabitoo1234/autonlp-mut_uchile-640218740", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gabitoo1234/autonlp-mut_uchile-640218740", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
internetoftim/demo | 9d6459a431eada1320cbb5971e66d85a4a807b98 | 2022-03-17T12:22:09.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | internetoftim | null | internetoftim/demo | 3 | null | transformers | 22,014 | Entry not found |
mansidw/finetuning-sentiment-model-12000-samples | 7f9738152927c2ca7132c717474faeb19a4a3c48 | 2022-03-15T09:40:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:ag_news",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mansidw | null | mansidw/finetuning-sentiment-model-12000-samples | 3 | null | transformers | 22,015 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name: finetuning-sentiment-model-12000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-12000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
bettertextapp/m2m-tai-en-de-gen-1.2B-1k-steps | 5bd528b5e6c0eb357805c871c71c8ff9b0e66887 | 2022-03-14T20:46:00.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bettertextapp | null | bettertextapp/m2m-tai-en-de-gen-1.2B-1k-steps | 3 | null | transformers | 22,016 | ---
tags:
- generated_from_trainer
model-index:
- name: m2m-tai-en-de-gen-1.2B-1k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m-tai-en-de-gen-1.2B-1k-steps
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-base-finetuned-ks | 1ecf6c326003b8b61316e32d71b7164c3cdee0e0 | 2022-03-15T17:32:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-base-finetuned-ks | 3 | null | transformers | 22,017 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0817
- Accuracy: 0.9844
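An inference sketch (an assumption; the card provides no usage code). The dummy waveform below stands in for a real 16 kHz recording:
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "DrishtiSharma/wav2vec2-base-finetuned-ks"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# One second of 16 kHz audio; replace this dummy array with a real waveform
# (e.g. loaded with soundfile or librosa) to get a meaningful keyword prediction.
waveform = np.zeros(16_000, dtype=np.float32)

inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```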
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6386 | 1.0 | 399 | 0.5305 | 0.9601 |
| 0.2358 | 2.0 | 798 | 0.1774 | 0.9747 |
| 0.1982 | 3.0 | 1197 | 0.1172 | 0.9794 |
| 0.1554 | 4.0 | 1596 | 0.0884 | 0.9835 |
| 0.1261 | 5.0 | 1995 | 0.0817 | 0.9844 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-rnd-no-adapter | 111e2cf102fb7fd24355423b24766efdd2376aa4 | 2022-03-17T06:35:21.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-rnd-no-adapter | 3 | null | transformers | 22,018 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8384
- Wer: 0.1367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2245 | 1.68 | 1500 | 6.1442 | 1.5986 |
| 5.4521 | 3.36 | 3000 | 5.4335 | 1.6439 |
| 3.3659 | 5.04 | 4500 | 3.6455 | 0.6503 |
| 1.5724 | 6.73 | 6000 | 2.3554 | 0.3386 |
| 1.4759 | 8.41 | 7500 | 1.7423 | 0.2889 |
| 1.0826 | 10.09 | 9000 | 1.3818 | 0.2209 |
| 0.6769 | 11.77 | 10500 | 1.1268 | 0.1737 |
| 0.7348 | 13.45 | 12000 | 0.9990 | 0.1575 |
| 0.5419 | 15.13 | 13500 | 0.9435 | 0.1560 |
| 0.4212 | 16.82 | 15000 | 0.8678 | 0.1405 |
| 0.3805 | 18.5 | 16500 | 0.8384 | 0.1367 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sap-ai-research/RoBERTa-base-SCD-ACL2022 | 1f67e9071623048610b976ec42fee43745ffde6c | 2022-03-16T00:41:41.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | sap-ai-research | null | sap-ai-research/RoBERTa-base-SCD-ACL2022 | 3 | null | transformers | 22,019 | ---
license: apache-2.0
---
|
clapika2010/rayyan_finetuned | dab04ccfd658de154a56b88ddfeeaac8d43ab7c6 | 2022-03-16T00:12:10.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | clapika2010 | null | clapika2010/rayyan_finetuned | 3 | null | transformers | 22,020 | Entry not found |
deepakvk/roberta-base-squad2-finetuned-squad | 1fa9400507fde61d75ce2f0aa1abcf4049af13f2 | 2022-03-16T12:50:16.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | deepakvk | null | deepakvk/roberta-base-squad2-finetuned-squad | 3 | null | transformers | 22,021 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ixa-ehu/roberta-eus-euscrawl-base-cased | 41f92c95224f18977000e212ae16074f54c4cc63 | 2022-03-16T11:48:42.000Z | [
"pytorch",
"roberta",
"fill-mask",
"eu",
"arxiv:2203.08111",
"transformers",
"basque",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | fill-mask | false | ixa-ehu | null | ixa-ehu/roberta-eus-euscrawl-base-cased | 3 | null | transformers | 22,022 | ---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus Euscrawl base cased
This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, which are pre-trained using different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: Basque RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa trained on the Basque portion of mc4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa trained on Basque portion of cc100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
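Each of these checkpoints can be loaded with the standard `transformers` masked-language-model classes. A minimal sketch for the base EusCrawl model (the masked Basque prompt is illustrative only):
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_id = "ixa-ehu/roberta-eus-euscrawl-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Illustrative Basque prompt ("The Basque Country is <mask>.")
print(fill_mask(f"Euskal Herria {tokenizer.mask_token} da."))
```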
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri,
Olatz Perez-de-Viñaspre, Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ai4bharat/MultiIndicWikiBioSS | c48c3bd663d83d2bda576ab585b52147eb0418ae | 2022-03-29T09:22:47.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"as",
"bn",
"hi",
"kn",
"ml",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicWikiBio",
"arxiv:2203.05437",
"transformers",
"wikibio",
"multilingual",
"nlp",
"indicnlp",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/MultiIndicWikiBioSS | 3 | null | transformers | 22,023 | ---
tags:
- wikibio
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicWikiBio
language:
- as
- bn
- hi
- kn
- ml
- or
- pa
- ta
- te
licenses:
- cc-by-nc-4.0
widget:
- text: <TAG> name </TAG> राम नरेश पांडेय <TAG> office </TAG> विधायक - 205 - कुशीनगर विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1967 से 1968 <TAG> nationality </TAG> भारतीय </s> <2hi>
---
# MultiIndicWikiBioSS
MultiIndicWikiBioSS is a multilingual, sequence-to-sequence pre-trained model: an [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint fine-tuned on the 9 languages of the [IndicWikiBio](https://huggingface.co/datasets/ai4bharat/IndicWikiBio) dataset. For fine-tuning details,
see the [paper](https://arxiv.org/abs/2203.05437). You can use MultiIndicWikiBioSS to build biography generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of the MultiIndicWikiBioSS are:
<ul>
<li >Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li>
<li> Fine-tuned on an Indic language corpora (34,653 examples). </li>
<li> Unlike ai4bharat/MultiIndicWikiBioUnified, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
</ul>
You can read more about MultiIndicWikiBioSS in this <a href="https://arxiv.org/abs/2203.05437">paper</a>.
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # __भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे।
```
## Benchmarks
Scores on the `IndicWikiBio` test sets are as follows:
Language | RougeL
---------|----------------------------
as | 56.50
bn | 56.58
hi | 67.34
kn | 39.37
ml | 38.42
or | 70.71
pa | 52.78
ta | 51.11
te | 51.72
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
# License
The model is available under the MIT License. |
newtonkwan/gpt2-xl-ft-with-non-challenging-0.8 | 56a8c58a437f3b064d052d314d97ec138cef2d61 | 2022-03-16T13:27:02.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | newtonkwan | null | newtonkwan/gpt2-xl-ft-with-non-challenging-0.8 | 3 | null | transformers | 22,024 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-with-non-challenging-0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-with-non-challenging-0.8
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3121
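A generation sketch for the fine-tuned checkpoint (not from the card; the prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="newtonkwan/gpt2-xl-ft-with-non-challenging-0.8",
)

print(generator("Once upon a time", max_length=50, do_sample=True, top_p=0.9))
```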
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.4443 |
| No log | 2.0 | 2 | 5.4221 |
| No log | 3.0 | 3 | 5.3779 |
| No log | 4.0 | 4 | 5.3121 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 10 |
nqcccccc/phobert-asba-qab | fabfedf152fec957132f7559d1fdb7bdb5561f30 | 2022-03-16T15:53:43.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | nqcccccc | null | nqcccccc/phobert-asba-qab | 3 | 0 | transformers | 22,025 | |
zdepablo/distilbert-base-uncased-finetuned-clinc | 2df99fdc26134b0d3b94aa4d475ab512e404cfd2 | 2022-03-16T23:33:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | zdepablo | null | zdepablo/distilbert-base-uncased-finetuned-clinc | 3 | null | transformers | 22,026 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7712
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2892 | 1.0 | 318 | 3.2831 | 0.7429 |
| 2.6246 | 2.0 | 636 | 1.8742 | 0.8326 |
| 1.5444 | 3.0 | 954 | 1.1526 | 0.8939 |
| 1.0097 | 4.0 | 1272 | 0.8568 | 0.9106 |
| 0.7929 | 5.0 | 1590 | 0.7712 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
negfir/distilbert-base-uncased-finetuned-qnli | 727af72a35c382153e546c7a7a5991805c93742d | 2022-03-17T03:59:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | negfir | null | negfir/distilbert-base-uncased-finetuned-qnli | 3 | null | transformers | 22,027 | Entry not found |
KoichiYasuoka/roberta-small-belarusian-upos | bfff1310a81dc94b8f4e12996db256faecdcac51 | 2022-05-07T13:33:36.000Z | [
"pytorch",
"roberta",
"token-classification",
"be",
"dataset:universal_dependencies",
"transformers",
"belarusian",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-small-belarusian-upos | 3 | null | transformers | 22,028 | ---
language:
- "be"
tags:
- "belarusian"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# roberta-small-belarusian-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_Belarusian](https://universaldependencies.org/be/) for POS-tagging and dependency-parsing, derived from [roberta-small-belarusian](https://huggingface.co/KoichiYasuoka/roberta-small-belarusian). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-belarusian-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-belarusian-upos")
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-belarusian-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
cammy/led-large-16384-arxiv-100-lit-evalMA-MDS1 | c9402a68fb3e4465f6ac98dafd41dc1c104081b1 | 2022-03-17T10:03:50.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/led-large-16384-arxiv-100-lit-evalMA-MDS1 | 3 | null | transformers | 22,029 | Entry not found |
sanchit-gandhi/wav2vec2-2-gpt2-no-adapter-regularisation | d38079fe6de5d36361370bbb6b7fa49bb1fca869 | 2022-03-19T17:43:39.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-gpt2-no-adapter-regularisation | 3 | null | transformers | 22,030 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7494
- Wer: 1.0532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4828 | 2.8 | 2500 | 4.0554 | 1.7873 |
| 0.8683 | 5.61 | 5000 | 2.5401 | 1.3156 |
| 0.4394 | 8.41 | 7500 | 1.7519 | 1.1129 |
| 0.0497 | 11.21 | 10000 | 1.7102 | 1.0738 |
| 0.031 | 14.01 | 12500 | 1.7395 | 1.0512 |
| 0.0508 | 16.82 | 15000 | 1.7254 | 1.0463 |
| 0.0462 | 19.62 | 17500 | 1.7494 | 1.0532 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
juns/imdb_finetuned_distilbert-base-uncased-finetuned-sst-2-english | 6af2d0b6e2229406e1cfa941c1f0c240e9464e54 | 2022-06-10T07:37:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | juns | null | juns/imdb_finetuned_distilbert-base-uncased-finetuned-sst-2-english | 3 | null | transformers | 22,031 | imdb_finetuned_distilbert-base-uncased-finetuned-sst-2-english for boostcamp ai tech 3
|
groversakshi1998/vul | 4fd285590b36a08b4171640d1c7961fcb7e2abb1 | 2022-03-18T17:13:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | groversakshi1998 | null | groversakshi1998/vul | 3 | null | transformers | 22,032 | Entry not found |
SophieTr/Reward_training_Pegasus_reddit | 1998fe245e9992921fffa6b6ab87e2e0c3511513 | 2022-04-13T10:07:43.000Z | [
"pytorch",
"pegasus",
"feature-extraction",
"transformers"
] | feature-extraction | false | SophieTr | null | SophieTr/Reward_training_Pegasus_reddit | 3 | null | transformers | 22,033 | Entry not found |
facebook/regnet-x-120 | 3bc20d8ddd65429667acb8e1de4c64290ad31373 | 2022-06-28T15:40:50.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-x-120 | 3 | null | transformers | 22,034 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-120")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-120")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-y-080 | 878bf14306c150dbf346b0a22ecd1b996a13a1b1 | 2022-06-30T10:14:19.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-080 | 3 | null | transformers | 22,035 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-080")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-080")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
beston91/gpt2-xl_ft_mult_10k | db8503a70e125fdce58ac20d2d07f7fb9da6bbf4 | 2022-03-20T22:27:58.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_mult_10k | 3 | null | transformers | 22,036 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_10k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6916
## Model description
More information needed
## Intended uses & limitations
More information needed
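### How to use (sketch)
This checkpoint is a causal language model, so it can be loaded with the standard `transformers` text-generation pipeline. The snippet below is only a minimal sketch: the prompt is illustrative and the fine-tuning data is not documented in this card.
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 XL checkpoint as a text-generation pipeline (~1.5B parameters).
generator = pipeline("text-generation", model="beston91/gpt2-xl_ft_mult_10k")

# Sample a short continuation for an example prompt.
print(generator("The meeting began with", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```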
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 54 | 1.3358 |
| No log | 1.99 | 108 | 0.7486 |
| No log | 2.99 | 162 | 0.6997 |
| No log | 3.99 | 216 | 0.6916 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 25.89222526550293
### Dataset Size
Size: 5000 |
facebook/regnet-y-640-seer-in1k | d06c7ac54598e22c0f4b08e1f68998fb593a130c | 2022-03-31T12:05:50.000Z | [
"pytorch",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2202.08360",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-640-seer-in1k | 3 | null | transformers | 22,037 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors trained [RegNet](https://huggingface.co/?models=regnet) models in a self-supervised fashion on billions of random images from the internet. This model was later fine-tuned on ImageNet.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-640-seer-in1k")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-640-seer-in1k")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
ShahafAricha/nqg-custom-bert2gpt-with-bert-finetuned | 8b4cc18faf81da635d932c1c0be92313ceb7d33f | 2022-03-18T21:16:12.000Z | [
"pytorch",
"encoder-decoder",
"transformers",
"license:other"
] | null | false | ShahafAricha | null | ShahafAricha/nqg-custom-bert2gpt-with-bert-finetuned | 3 | null | transformers | 22,038 | ---
license: other
---
|
ShahafAricha/nqg-gpt2 | 250a91e2a50d7184a3f0cf03dd49cccfc407cd3f | 2022-03-19T17:20:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:other"
] | text-generation | false | ShahafAricha | null | ShahafAricha/nqg-gpt2 | 3 | null | transformers | 22,039 | ---
license: other
datasets:
- squad
tags:
- question-generation
widget:
- text: "The Technikum was conceived in the early 1900s by the German-Jewish fund Ezrah as a school of [HL]engineering and sciences[HL].[SEP]"
---
# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
**This is a reproduced version trained on the distilled SQuAD dataset.**
More detail: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)
## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
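### Example (sketch)
A minimal sketch of how the highlighted-context format above could be fed to this GPT-2 checkpoint. It assumes the tokenizer shipped with the checkpoint already knows the [HL] and [SEP] markers used in the widget example; the decoding settings are illustrative, not the settings used by the author.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ShahafAricha/nqg-gpt2")
model = AutoModelForCausalLM.from_pretrained("ShahafAricha/nqg-gpt2")

# Context with the answer span wrapped in [HL] markers, following the input format above.
context = (
    "The Technikum was conceived in the early 1900s by the German-Jewish fund Ezrah "
    "as a school of [HL]engineering and sciences[HL].[SEP]"
)

inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=3, early_stopping=True)
# Decode only the tokens generated after the prompt, which should be the question.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
``` |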
ShahafAricha/nqg-custom-bert2gpt-with-bert-uncased | 34780d5ece1fab3ac433e5fa480ed5588ea3d1c6 | 2022-03-19T01:51:07.000Z | [
"pytorch",
"encoder-decoder",
"transformers",
"license:other"
] | null | false | ShahafAricha | null | ShahafAricha/nqg-custom-bert2gpt-with-bert-uncased | 3 | null | transformers | 22,040 | ---
license: other
---
|
Pavithra/code-parrot | 64138d9eaea6ec3eb003e6e53f80f5d224a9f9fc | 2022-03-19T04:04:29.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Pavithra | null | Pavithra/code-parrot | 3 | null | transformers | 22,041 | # CodeParrot 🦜 (small)
CodeParrot 🦜 is a GPT-2 model (110M parameters) trained to generate Python code.
## Usage
You can load the CodeParrot model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("lvwerra/codeparrot-small")
model = AutoModelWithLMHead.from_pretrained("lvwerra/codeparrot-small")
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="lvwerra/codeparrot-small")
outputs = pipe("def hello_world():")
```
## Training
The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/lvwerra/codeparrot-clean) with the following settings:
|Config|Value|
|-------|-----|
|Batch size| 192 |
|Context size| 1024 |
|Training steps| 150'000|
|Gradient accumulation| 1|
|Gradient checkpointing| False|
|Learning rate| 5e-4 |
|Weight decay | 0.1 |
|Warmup steps| 2000 |
|Schedule| Cosine |
The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 29 billion tokens.
## Performance
We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges:
| Metric | Value |
|-------|-----|
|pass@1 | 3.80% |
|pass@10 | 6.57% |
|pass@100 | 12.78% |
The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probability that at least one out of k generations passes the tests.
## Resources
- Dataset: [full](https://huggingface.co/datasets/lvwerra/codeparrot-clean), [train](https://huggingface.co/datasets/lvwerra/codeparrot-clean-train), [valid](https://huggingface.co/datasets/lvwerra/codeparrot-clean-valid)
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
- Spaces: [generation](), [highlighting]() |
mansidw/fake-tipping-6000-samples | c3aa367990c8b8e1b517c52011f68e5e56ff8f12 | 2022-03-19T09:46:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mansidw | null | mansidw/fake-tipping-6000-samples | 3 | null | transformers | 22,042 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fake-tipping-6000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake-tipping-6000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ShengdingHu/TriviaQA_T5-large_LoRA | 84588c1f0362c76010d8b13a7c8d48302e6e61b3 | 2022-03-19T16:42:54.000Z | [
"pytorch",
"transformers"
] | null | false | ShengdingHu | null | ShengdingHu/TriviaQA_T5-large_LoRA | 3 | null | transformers | 22,043 | Entry not found |
msamogh/autonlp-cai-out-of-scope-649919112 | fc32dea7abecfacb139de54ceed36be10e8255f2 | 2022-03-19T21:40:41.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | msamogh | null | msamogh/autonlp-cai-out-of-scope-649919112 | 3 | null | transformers | 22,044 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 0.49924480682533606
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919112
- CO2 Emissions (in grams): 0.49924480682533606
## Validation Metrics
- Loss: 0.49354293942451477
- Accuracy: 0.8064516129032258
- Precision: 0.8181818181818182
- Recall: 0.9
- AUC: 0.8689393939393939
- F1: 0.8571428571428572
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919112
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919112", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919112", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
doppel-neo/hubert-large-ami-shard-experiment-colab | 55070c826a63c2dd0a5462e8cdec66b35da32df7 | 2022-03-29T00:39:37.000Z | [
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | doppel-neo | null | doppel-neo/hubert-large-ami-shard-experiment-colab | 3 | null | transformers | 22,045 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hubert-large-ami-shard-experiment-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-ami-shard-experiment-colab
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: nan
- eval_wer: 1.0
- eval_runtime: 6.0682
- eval_samples_per_second: 16.479
- eval_steps_per_second: 2.142
- epoch: 1.02
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga | f953f3989441a776219e38ca9d90916da8d75888 | 2022-03-20T14:36:20.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga | 3 | null | transformers | 22,046 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-1000-lit-evalMA-ga
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-1000-lit-evalMA-ga
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6852
- Rouge1: 25.789
- Rouge2: 11.0694
- Rougel: 20.7716
- Rougelsum: 22.4851
- Gen Len: 46.32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 250 | 1.7061 | 25.8286 | 10.8156 | 20.9502 | 22.6588 | 44.36 |
| 1.4533 | 2.0 | 500 | 1.6876 | 26.0862 | 11.5197 | 21.1282 | 23.0963 | 45.65 |
| 1.4533 | 3.0 | 750 | 1.6852 | 25.789 | 11.0694 | 20.7716 | 22.4851 | 46.32 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga1 | c2aedf8d87d5d496e20a6f81882cabea3798818b | 2022-03-20T16:07:53.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga1 | 3 | null | transformers | 22,047 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-1000-lit-evalMA-ga1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-1000-lit-evalMA-ga1
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6852
- Rouge1: 25.8242
- Rouge2: 11.1309
- Rougel: 20.7946
- Rougelsum: 22.5591
- Gen Len: 46.32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 250 | 1.7061 | 25.8547 | 10.8573 | 20.8419 | 22.5942 | 44.36 |
| 1.4533 | 2.0 | 500 | 1.6876 | 26.105 | 11.5635 | 21.132 | 23.044 | 45.65 |
| 1.4533 | 3.0 | 750 | 1.6852 | 25.8242 | 11.1309 | 20.7946 | 22.5591 | 46.32 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
PSW/ut-del-two-at-once-ver2 | 4791b9487575564ccbfb6b457191d7c4f7a9c8f8 | 2022-03-21T05:50:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut-del-two-at-once-ver2 | 3 | null | transformers | 22,048 | Entry not found |
Ameer05/bart-large-finetuned-resume-summarizer-bathcsize-8-epoch-9 | b71c77d7e8cb68186b57ff2c9b1f715e89d33ee0 | 2022-03-21T07:52:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | Ameer05 | null | Ameer05/bart-large-finetuned-resume-summarizer-bathcsize-8-epoch-9 | 3 | null | transformers | 22,049 | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-finetuned-resume-summarizer-bathcsize-8-epoch-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-resume-summarizer-bathcsize-8-epoch-9
This model is a fine-tuned version of [Ameer05/tokenizer-repo](https://huggingface.co/Ameer05/tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5988
- Rouge1: 54.4865
- Rouge2: 45.2321
- Rougel: 50.0237
- Rougelsum: 53.2463
## Model description
More information needed
## Intended uses & limitations
More information needed
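### How to use (sketch)
A minimal sketch of how this checkpoint could be used for summarization with the standard `transformers` pipeline; the input is a placeholder and the generation lengths are illustrative, not values recommended by the author.
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Ameer05/bart-large-finetuned-resume-summarizer-bathcsize-8-epoch-9",
)

# Placeholder input: substitute the resume text you want to summarize.
resume_text = "Replace this with the resume text to summarize."
print(summarizer(resume_text, max_length=128, min_length=30)[0]["summary_text"])
```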
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.3463 | 1.0 | 44 | 2.0015 | 50.2382 | 40.3332 | 45.6831 | 49.1811 |
| 0.2771 | 2.0 | 88 | 2.0433 | 58.3265 | 50.1555 | 54.3681 | 56.9592 |
| 0.172 | 3.0 | 132 | 2.2077 | 55.9801 | 47.6352 | 51.9102 | 54.3347 |
| 0.1251 | 4.0 | 176 | 2.1834 | 53.3525 | 44.2643 | 49.9253 | 52.0145 |
| 0.0901 | 5.0 | 220 | 2.2857 | 56.7259 | 46.7879 | 52.3245 | 55.16 |
| 0.0506 | 6.0 | 264 | 2.5131 | 53.8128 | 44.9024 | 50.4617 | 52.8586 |
| 0.0434 | 7.0 | 308 | 2.5274 | 52.076 | 41.8135 | 47.3822 | 50.2634 |
| 0.0269 | 8.0 | 352 | 2.6374 | 54.7639 | 45.51 | 50.2608 | 53.6006 |
| 0.0147 | 9.0 | 396 | 2.5988 | 54.4865 | 45.2321 | 50.0237 | 53.2463 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
danyaljj/gpt-j-6B-step-338500 | 33df4caf2ae37d2340c49babbdde9c5c6cb30c9b | 2022-03-22T23:11:10.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-338500 | 3 | null | transformers | 22,050 | Entry not found |
danyaljj/gpt-j-6B-step-348500 | 6b0819deab2da7da0ab19fcb1d6ca1f0cf191b26 | 2022-03-22T23:09:30.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-348500 | 3 | null | transformers | 22,051 | Entry not found |
danyaljj/gpt-j-6B-step-358500 | 52174f25a3c2c34eaf02bac6b846e6f0fdd91900 | 2022-03-22T23:11:27.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-358500 | 3 | null | transformers | 22,052 | Entry not found |
danyaljj/gpt-j-6B-step-384000 | 06b760982498b54291fd3aad58e1e42e47f27ff0 | 2022-03-22T23:10:24.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-384000 | 3 | null | transformers | 22,053 | Entry not found |
Taekyoon/unicon_v0.5.2_alpha | 17b20840fb9a7ce3b9fb24481bb0f93ea3d262d1 | 2022-03-22T04:03:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Taekyoon | null | Taekyoon/unicon_v0.5.2_alpha | 3 | null | transformers | 22,054 | Entry not found |
edwardjross/xlm-roberta-base-finetuned-panx-it | ff349097ffd8602d1b2a14a7afd527bcc971ff07 | 2022-03-22T13:30:39.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | edwardjross | null | edwardjross/xlm-roberta-base-finetuned-panx-it | 3 | null | transformers | 22,055 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8330592105263157
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2532
- F1: 0.8331
## Model description
More information needed
## Intended uses & limitations
More information needed
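### How to use (sketch)
Since the model was fine-tuned on PAN-X.it (Italian named entity recognition), it can be loaded with the token-classification pipeline. This is a minimal sketch; the example sentence is illustrative and the labels follow the PAN-X tag set (PER/ORG/LOC).
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # group word pieces into whole entities
)

print(ner("Il Colosseo si trova a Roma."))
```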
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6951 | 1.0 | 105 | 0.2967 | 0.7682 |
| 0.2824 | 2.0 | 210 | 0.2569 | 0.8201 |
| 0.1724 | 3.0 | 315 | 0.2532 | 0.8331 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mukayese/mt5-base-turkish-sum | 89b8bd053256c85e56de13a9fa50c97dc3709d7e | 2022-03-22T14:32:20.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"dataset:mlsum",
"arxiv:2203.01215",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mukayese | null | mukayese/mt5-base-turkish-sum | 3 | 1 | transformers | 22,056 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mt5-base-turkish-sum
results:
- task:
name: Summarization
type: summarization
dataset:
name: mlsum tu
type: mlsum
args: tu
metrics:
- name: Rouge1
type: rouge
value: 47.4222
---
# [Mukayese: Turkish NLP Strikes Back](https://arxiv.org/abs/2203.01215)
## Summarization: mukayese/mt5-base-turkish-sum
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mlsum/tu dataset.
It achieves the following results on the evaluation set:
- Rouge1: 47.4222
- Rouge2: 34.8624
- Rougel: 42.2487
- Rougelsum: 43.9494
Check [this](https://arxiv.org/abs/2203.01215) paper for more details on the model and the dataset.
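### Usage (sketch)
A minimal sketch of how the checkpoint could be used for Turkish abstractive summarization. No task prefix is assumed and the generation settings are illustrative; they are not the settings used for the reported scores.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mukayese/mt5-base-turkish-sum")
model = AutoModelForSeq2SeqLM.from_pretrained("mukayese/mt5-base-turkish-sum")

# Placeholder input: substitute the Turkish news article to be summarized.
article = "Replace this with the Turkish news article to be summarized."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```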
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.2+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
### Citation
```
@misc{safaya-etal-2022-mukayese,
title={Mukayese: Turkish NLP Strikes Back},
author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
year={2022},
eprint={2203.01215},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
buddhist-nlp/english-tibetan | 71783a72e273d2be90c586542a063b3f8de4f800 | 2022-03-22T20:42:01.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | buddhist-nlp | null | buddhist-nlp/english-tibetan | 3 | null | transformers | 22,057 | |
mmohamme/distilbert-base-uncased-finetuned-btc_2_ue | 570e586d269268eff5545c41151a107ec9bcc667 | 2022-04-05T23:54:51.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | mmohamme | null | mmohamme/distilbert-base-uncased-finetuned-btc_2_ue | 3 | null | transformers | 22,058 | Entry not found |
g4ry/classification_experiment | a3cea6de40ca1020ced2a99fff0532c2f9366b48 | 2022-03-23T14:47:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | g4ry | null | g4ry/classification_experiment | 3 | null | transformers | 22,059 | Entry not found |
rajeshradhakrishnan/malayalam_news_classifier | f8aac6cf599a76caa42b3a40e51aac40c69e139f | 2022-03-23T06:04:12.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | rajeshradhakrishnan | null | rajeshradhakrishnan/malayalam_news_classifier | 3 | null | transformers | 22,060 | Entry not found |
cammy/led-large-16384-arxiv-100-MDS-global | 8fe34e2c356166282175689a6ccc9562aba6656d | 2022-03-23T07:05:45.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/led-large-16384-arxiv-100-MDS-global | 3 | null | transformers | 22,061 | Entry not found |
Gare/opus-mt-en-ro-finetuned-en-to-ro | fdb442380b6739a8490f86e720cef340758e455d | 2022-03-23T12:51:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Gare | null | Gare/opus-mt-en-ro-finetuned-en-to-ro | 3 | null | transformers | 22,062 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.0527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2878
- Bleu: 28.0527
- Gen Len: 34.079
## Model description
More information needed
## Intended uses & limitations
More information needed
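### How to use (sketch)
A minimal sketch of how this English-to-Romanian checkpoint could be used with the translation pipeline; the sentence is only an example.
```python
from transformers import pipeline

translator = pipeline("translation", model="Gare/opus-mt-en-ro-finetuned-en-to-ro")
print(translator("The committee approved the new regulations yesterday.")[0]["translation_text"])
```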
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7445 | 1.0 | 38145 | 1.2878 | 28.0527 | 34.079 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
abdusahmbzuai/aradia-ctc-v1 | ce169be1b9328ca91c22c3d639e8e19f53250838 | 2022-03-30T13:48:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"abdusahmbzuai/arabic_speech_massive_300hrs",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | abdusahmbzuai | null | abdusahmbzuai/aradia-ctc-v1 | 3 | null | transformers | 22,063 | ---
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_300hrs
- generated_from_trainer
model-index:
- name: aradia-ctc-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-v1
This model is a fine-tuned version of a local checkpoint (`/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-v1`) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7171
- Wer: 0.3336
## Model description
More information needed
## Intended uses & limitations
More information needed
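### How to use (sketch)
A minimal sketch of how this CTC checkpoint could be used for Arabic speech recognition, assuming a wav2vec2-style model and 16 kHz mono audio; `sample.wav` is a placeholder path and decoding the file requires ffmpeg.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="abdusahmbzuai/aradia-ctc-v1")

# "sample.wav" is a placeholder path to a 16 kHz mono Arabic speech recording.
print(asr("sample.wav")["text"])
```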
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.22 | 100 | 5.1889 | 1.0 |
| No log | 0.43 | 200 | 3.1129 | 1.0 |
| No log | 0.65 | 300 | 3.0503 | 1.0 |
| No log | 0.87 | 400 | 3.0279 | 1.0 |
| 6.2756 | 1.09 | 500 | 2.9965 | 1.0 |
| 6.2756 | 1.3 | 600 | 2.3618 | 0.9993 |
| 6.2756 | 1.52 | 700 | 1.2715 | 0.8758 |
| 6.2756 | 1.74 | 800 | 0.9971 | 0.7156 |
| 6.2756 | 1.96 | 900 | 0.8927 | 0.6382 |
| 1.712 | 2.17 | 1000 | 0.8252 | 0.5926 |
| 1.712 | 2.39 | 1100 | 0.7794 | 0.5434 |
| 1.712 | 2.61 | 1200 | 0.7557 | 0.5092 |
| 1.712 | 2.83 | 1300 | 0.7347 | 0.5203 |
| 1.712 | 3.04 | 1400 | 0.7189 | 0.4929 |
| 0.9305 | 3.26 | 1500 | 0.6820 | 0.4595 |
| 0.9305 | 3.48 | 1600 | 0.6792 | 0.4504 |
| 0.9305 | 3.69 | 1700 | 0.6596 | 0.4442 |
| 0.9305 | 3.91 | 1800 | 0.6756 | 0.4432 |
| 0.9305 | 4.13 | 1900 | 0.6663 | 0.4392 |
| 0.737 | 4.35 | 2000 | 0.6479 | 0.4372 |
| 0.737 | 4.56 | 2100 | 0.6353 | 0.4203 |
| 0.737 | 4.78 | 2200 | 0.6251 | 0.4088 |
| 0.737 | 5.0 | 2300 | 0.6209 | 0.4177 |
| 0.737 | 5.22 | 2400 | 0.6639 | 0.4094 |
| 0.6247 | 5.43 | 2500 | 0.6408 | 0.3970 |
| 0.6247 | 5.65 | 2600 | 0.6373 | 0.3932 |
| 0.6247 | 5.87 | 2700 | 0.6411 | 0.3928 |
| 0.6247 | 6.09 | 2800 | 0.6378 | 0.3897 |
| 0.6247 | 6.3 | 2900 | 0.6396 | 0.3929 |
| 0.5443 | 6.52 | 3000 | 0.6544 | 0.3864 |
| 0.5443 | 6.74 | 3100 | 0.6218 | 0.3786 |
| 0.5443 | 6.96 | 3200 | 0.6200 | 0.3784 |
| 0.5443 | 7.17 | 3300 | 0.6157 | 0.3791 |
| 0.5443 | 7.39 | 3400 | 0.6317 | 0.3798 |
| 0.4845 | 7.61 | 3500 | 0.6540 | 0.3771 |
| 0.4845 | 7.83 | 3600 | 0.6436 | 0.3670 |
| 0.4845 | 8.04 | 3700 | 0.6335 | 0.3695 |
| 0.4845 | 8.26 | 3800 | 0.6579 | 0.3610 |
| 0.4845 | 8.48 | 3900 | 0.6170 | 0.3613 |
| 0.4279 | 8.69 | 4000 | 0.6523 | 0.3617 |
| 0.4279 | 8.91 | 4100 | 0.6349 | 0.3577 |
| 0.4279 | 9.13 | 4200 | 0.6344 | 0.3673 |
| 0.4279 | 9.35 | 4300 | 0.6215 | 0.3641 |
| 0.4279 | 9.56 | 4400 | 0.6513 | 0.3608 |
| 0.3825 | 9.78 | 4500 | 0.6386 | 0.3605 |
| 0.3825 | 10.0 | 4600 | 0.6724 | 0.3549 |
| 0.3825 | 10.22 | 4700 | 0.6776 | 0.3602 |
| 0.3825 | 10.43 | 4800 | 0.6739 | 0.3544 |
| 0.3825 | 10.65 | 4900 | 0.6688 | 0.3557 |
| 0.3477 | 10.87 | 5000 | 0.6674 | 0.3564 |
| 0.3477 | 11.09 | 5100 | 0.6786 | 0.3476 |
| 0.3477 | 11.3 | 5200 | 0.6818 | 0.3478 |
| 0.3477 | 11.52 | 5300 | 0.6874 | 0.3470 |
| 0.3477 | 11.74 | 5400 | 0.6993 | 0.3424 |
| 0.3101 | 11.96 | 5500 | 0.6950 | 0.3404 |
| 0.3101 | 12.17 | 5600 | 0.6872 | 0.3406 |
| 0.3101 | 12.39 | 5700 | 0.6846 | 0.3424 |
| 0.3101 | 12.61 | 5800 | 0.7051 | 0.3405 |
| 0.3101 | 12.83 | 5900 | 0.7051 | 0.3378 |
| 0.2859 | 13.04 | 6000 | 0.6955 | 0.3403 |
| 0.2859 | 13.26 | 6100 | 0.7115 | 0.3390 |
| 0.2859 | 13.48 | 6200 | 0.7074 | 0.3384 |
| 0.2859 | 13.69 | 6300 | 0.7002 | 0.3376 |
| 0.2859 | 13.91 | 6400 | 0.7171 | 0.3360 |
| 0.2714 | 14.13 | 6500 | 0.7193 | 0.3341 |
| 0.2714 | 14.35 | 6600 | 0.7132 | 0.3347 |
| 0.2714 | 14.56 | 6700 | 0.7184 | 0.3353 |
| 0.2714 | 14.78 | 6800 | 0.7171 | 0.3331 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Graphcore/roberta-base-squad2 | 0c6a5fb56d084dde2538bab108d23e79dfe2b23f | 2022-05-25T18:25:20.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad_v2",
"arxiv:1907.11692",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Graphcore | null | Graphcore/roberta-base-squad2 | 3 | null | transformers | 22,064 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-squad2
results: []
---
# Graphcore/roberta-base-squad2
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
RoBERTa is based on the BERT pretraining approach and improves on it by carefully re-evaluating a number of BERT's pretraining design decisions, which the authors found had left the original model undertrained.
It improves performance by training the model longer with bigger batches over more data, removing the next-sentence-prediction objective, training on longer sequences, and dynamically changing the masking pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [HuggingFace/roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
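### How to use (sketch)
Although the checkpoint was trained on IPUs, the exported weights are assumed to follow the standard RoBERTa format, so they can be loaded with the usual question-answering pipeline on CPU or GPU. The question/context pair below is only an example.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Graphcore/roberta-base-squad2")

result = qa(
    question="What kind of questions does SQuAD 2.0 add?",
    context=(
        "SQuAD 2.0 combines the 100,000 questions in SQuAD 1.1 with over 50,000 "
        "unanswerable questions written adversarially by crowdworkers."
    ),
)
print(result["answer"], result["score"])
```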
## Training and evaluation data
Trained and evaluated on the SQuAD v2 dataset:
- [HuggingFace/squad_v2](https://huggingface.co/datasets/squad_v2).
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/question-answering/run_qa.py \
--ipu_config_name Graphcore/roberta-base-ipu \
--model_name_or_path roberta-base \
--dataset_name squad_v2 \
--version_2_with_negative \
--do_train \
--do_eval \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 2 \
--pod_type pod16 \
--learning_rate 7e-5 \
--max_seq_length 384 \
--doc_stride 128 \
--seed 1984 \
--lr_scheduler_type linear \
--loss_scaling 64 \
--weight_decay 0.01 \
--warmup_ratio 0.2 \
--logging_steps 1 \
--save_steps -1 \
--dataloader_num_workers 64 \
--output_dir roberta-base-squad2 \
--overwrite_output_dir \
--push_to_hub
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 1984
- distributed_type: IPU
- total_train_batch_size: 256
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
epoch = 3.0
train_loss = 0.9982
train_runtime = 0:04:44.21
train_samples = 131823
train_samples_per_second = 1391.43
train_steps_per_second = 5.425
***** eval metrics *****
epoch = 3.0
eval_HasAns_exact = 78.1208
eval_HasAns_f1 = 84.6569
eval_HasAns_total = 5928
eval_NoAns_exact = 82.0353
eval_NoAns_f1 = 82.0353
eval_NoAns_total = 5945
eval_best_exact = 80.0809
eval_best_exact_thresh = 0.0
eval_best_f1 = 83.3442
eval_best_f1_thresh = 0.0
eval_exact = 80.0809
eval_f1 = 83.3442
eval_samples = 12165
eval_total = 11873
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
groversakshi1998/vul_cwe | 8952bcd469cb5946795792a0c3617c3a4de6156c | 2022-03-23T13:45:24.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | groversakshi1998 | null | groversakshi1998/vul_cwe | 3 | null | transformers | 22,065 | Entry not found |
PSW/ut_del_n_per_each_ver1_2epoch | 301847c236ac908c2269817bc0b174d7d9210782 | 2022-03-23T17:11:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_n_per_each_ver1_2epoch | 3 | null | transformers | 22,066 | Entry not found |
yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-5 | 211a3a936d1afa7debc41820dc9f674819258c74 | 2022-03-24T04:48:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-5 | 3 | null | transformers | 22,067 | Entry not found |
rurupang/roberta-base-finetuned-sts-accuracy | bda5f611aed4b3bffbf2250346c6e0581b4686ea | 2022-03-24T07:31:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | rurupang | null | rurupang/roberta-base-finetuned-sts-accuracy | 3 | null | transformers | 22,068 | Entry not found |
yy642/bert-base-uncased-finetuned-rte-max-length-512-epoch-5 | f52a3bb7daf4c8f95de08d6736aa35e8f5795555 | 2022-03-24T05:04:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-rte-max-length-512-epoch-5 | 3 | null | transformers | 22,069 | Entry not found |
rurupang/roberta-base-finetuned-sts-pearsonr_ | 72198f48bd1dd1b380107b056598b0c347bd48fb | 2022-03-24T14:09:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | rurupang | null | rurupang/roberta-base-finetuned-sts-pearsonr_ | 3 | null | transformers | 22,070 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-zle-fr | 6a234a4b0aa1de076411b6a7573bfc1878cfb253 | 2022-06-01T13:09:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"fr",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-fr | 3 | null | transformers | 22,071 | ---
language:
- be
- fr
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-fr
results:
- task:
name: Translation bel-fra
type: translation
args: bel-fra
dataset:
name: tatoeba-test-v2020-07-28-v2021-08-07
type: tatoeba_mt
args: bel-fra
metrics:
- name: BLEU
type: bleu
value: 46.4
- task:
name: Translation multi-fra
type: translation
args: multi-fra
dataset:
name: tatoeba-test-v2020-07-28-v2021-08-07
type: tatoeba_mt
args: multi-fra
metrics:
- name: BLEU
type: bleu
value: 52.4
- task:
name: Translation rus-fra
type: translation
args: rus-fra
dataset:
name: tatoeba-test-v2020-07-28-v2021-08-07
type: tatoeba_mt
args: rus-fra
metrics:
- name: BLEU
type: bleu
value: 51.8
- task:
name: Translation ukr-fra
type: translation
args: ukr-fra
dataset:
name: tatoeba-test-v2020-07-28-v2021-08-07
type: tatoeba_mt
args: ukr-fra
metrics:
- name: BLEU
type: bleu
value: 50.7
- task:
name: Translation rus-fra
type: translation
args: rus-fra
dataset:
name: newstest2012
type: wmt-2012-news
args: rus-fra
metrics:
- name: BLEU
type: bleu
value: 25.3
- task:
name: Translation rus-fra
type: translation
args: rus-fra
dataset:
name: newstest2013
type: wmt-2013-news
args: rus-fra
metrics:
- name: BLEU
type: bleu
value: 29.7
---
# opus-mt-tc-big-zle-fr
Neural machine translation model for translating from East Slavic languages (zle) to French (fr).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): bel rus ukr
* target language(s): fra
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fra/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information released models: [OPUS-MT zle-fra README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-fra/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Подавай блюдо на тарелке.",
"Операція не може чекати."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Servez le plat dans l'assiette.
# L'opération ne peut pas attendre.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-fr")
print(pipe("Подавай блюдо на тарелке."))
# expected output: Servez le plat dans l'assiette.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fra/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fra/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bel-fra | tatoeba-test-v2020-07-28-v2021-08-07 | 0.65415 | 46.4 | 283 | 2005 |
| multi-fra | tatoeba-test-v2020-07-28-v2021-08-07 | 0.68422 | 52.4 | 10000 | 66671 |
| rus-fra | tatoeba-test-v2020-07-28-v2021-08-07 | 0.68699 | 51.8 | 11490 | 80573 |
| ukr-fra | tatoeba-test-v2020-07-28-v2021-08-07 | 0.67887 | 50.7 | 10035 | 63222 |
| rus-fra | newstest2012 | 0.53679 | 25.3 | 3003 | 78011 |
| rus-fra | newstest2013 | 0.56211 | 29.7 | 3000 | 70037 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 22:45:20 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zle-pt | af6ba1dfd8770924e304c76a4b255a9759af936d | 2022-06-01T13:07:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pt",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-pt | 3 | null | transformers | 22,072 | ---
language:
- pt
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-pt
results:
- task:
name: Translation rus-por
type: translation
args: rus-por
dataset:
name: flores101-devtest
type: flores_101
args: rus por devtest
metrics:
- name: BLEU
type: bleu
value: 31.9
- task:
name: Translation ukr-por
type: translation
args: ukr-por
dataset:
name: flores101-devtest
type: flores_101
args: ukr por devtest
metrics:
- name: BLEU
type: bleu
value: 33.6
- task:
name: Translation rus-por
type: translation
args: rus-por
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-por
metrics:
- name: BLEU
type: bleu
value: 42.8
- task:
name: Translation ukr-por
type: translation
args: ukr-por
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-por
metrics:
- name: BLEU
type: bleu
value: 45.2
---
# opus-mt-tc-big-zle-pt
Neural machine translation model for translating from East Slavic languages (zle) to Portuguese (pt).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): rus ukr
* target language(s): por
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-por/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information released models: [OPUS-MT zle-por README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-por/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>por<< Я маленькая.",
">>por<< Я войду первым."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-pt"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Sou pequena.
# Eu entro primeiro.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-pt")
print(pipe(">>por<< Я маленькая."))
# expected output: Sou pequena.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-por/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-por/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| rus-por | tatoeba-test-v2021-08-07 | 0.63749 | 42.8 | 10000 | 74713 |
| ukr-por | tatoeba-test-v2021-08-07 | 0.65288 | 45.2 | 3372 | 21315 |
| bel-por | flores101-devtest | 0.48481 | 16.2 | 1012 | 26519 |
| rus-por | flores101-devtest | 0.58567 | 31.9 | 1012 | 26519 |
| ukr-por | flores101-devtest | 0.59378 | 33.6 | 1012 | 26519 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 23:45:22 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zle-zls | f594ac529d02042db4f283494507b50a56e7dbd9 | 2022-06-01T13:09:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"bg",
"hr",
"ru",
"sh",
"sl",
"sr_Cyrl",
"sr_Latn",
"uk",
"zle",
"zls",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-zls | 3 | null | transformers | 22,073 | ---
language:
- be
- bg
- hr
- ru
- sh
- sl
- sr_Cyrl
- sr_Latn
- uk
- zle
- zls
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-zls
results:
- task:
name: Translation rus-bul
type: translation
args: rus-bul
dataset:
name: flores101-devtest
type: flores_101
args: rus bul devtest
metrics:
- name: BLEU
type: bleu
value: 28.9
- task:
name: Translation rus-hrv
type: translation
args: rus-hrv
dataset:
name: flores101-devtest
type: flores_101
args: rus hrv devtest
metrics:
- name: BLEU
type: bleu
value: 23.2
- task:
name: Translation rus-mkd
type: translation
args: rus-mkd
dataset:
name: flores101-devtest
type: flores_101
args: rus mkd devtest
metrics:
- name: BLEU
type: bleu
value: 24.3
- task:
name: Translation rus-slv
type: translation
args: rus-slv
dataset:
name: flores101-devtest
type: flores_101
args: rus slv devtest
metrics:
- name: BLEU
type: bleu
value: 23.1
- task:
name: Translation rus-srp_Cyrl
type: translation
args: rus-srp_Cyrl
dataset:
name: flores101-devtest
type: flores_101
args: rus srp_Cyrl devtest
metrics:
- name: BLEU
type: bleu
value: 24.1
- task:
name: Translation ukr-bul
type: translation
args: ukr-bul
dataset:
name: flores101-devtest
type: flores_101
args: ukr bul devtest
metrics:
- name: BLEU
type: bleu
value: 30.8
- task:
name: Translation ukr-hrv
type: translation
args: ukr-hrv
dataset:
name: flores101-devtest
type: flores_101
args: ukr hrv devtest
metrics:
- name: BLEU
type: bleu
value: 24.6
- task:
name: Translation ukr-mkd
type: translation
args: ukr-mkd
dataset:
name: flores101-devtest
type: flores_101
args: ukr mkd devtest
metrics:
- name: BLEU
type: bleu
value: 26.2
- task:
name: Translation ukr-slv
type: translation
args: ukr-slv
dataset:
name: flores101-devtest
type: flores_101
args: ukr slv devtest
metrics:
- name: BLEU
type: bleu
value: 24.2
- task:
name: Translation ukr-srp_Cyrl
type: translation
args: ukr-srp_Cyrl
dataset:
name: flores101-devtest
type: flores_101
args: ukr srp_Cyrl devtest
metrics:
- name: BLEU
type: bleu
value: 26.2
- task:
name: Translation rus-bul
type: translation
args: rus-bul
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-bul
metrics:
- name: BLEU
type: bleu
value: 53.7
- task:
name: Translation rus-hbs
type: translation
args: rus-hbs
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-hbs
metrics:
- name: BLEU
type: bleu
value: 49.4
- task:
name: Translation rus-slv
type: translation
args: rus-slv
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-slv
metrics:
- name: BLEU
type: bleu
value: 21.5
- task:
name: Translation rus-srp_Cyrl
type: translation
args: rus-srp_Cyrl
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-srp_Cyrl
metrics:
- name: BLEU
type: bleu
value: 46.1
- task:
name: Translation rus-srp_Latn
type: translation
args: rus-srp_Latn
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-srp_Latn
metrics:
- name: BLEU
type: bleu
value: 51.7
- task:
name: Translation ukr-bul
type: translation
args: ukr-bul
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-bul
metrics:
- name: BLEU
type: bleu
value: 61.3
- task:
name: Translation ukr-hbs
type: translation
args: ukr-hbs
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-hbs
metrics:
- name: BLEU
type: bleu
value: 52.1
- task:
name: Translation ukr-hrv
type: translation
args: ukr-hrv
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-hrv
metrics:
- name: BLEU
type: bleu
value: 50.1
- task:
name: Translation ukr-srp_Cyrl
type: translation
args: ukr-srp_Cyrl
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-srp_Cyrl
metrics:
- name: BLEU
type: bleu
value: 54.7
- task:
name: Translation ukr-srp_Latn
type: translation
args: ukr-srp_Latn
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-srp_Latn
metrics:
- name: BLEU
type: bleu
value: 53.4
---
# opus-mt-tc-big-zle-zls
Neural machine translation model for translating from East Slavic languages (zle) to South Slavic languages (zls).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): bel rus ukr
* target language(s): bul hbs hrv slv srp_Cyrl srp_Latn
* valid target language labels: >>bul<< >>hbs<< >>hrv<< >>slv<< >>srp_Cyrl<< >>srp_Latn<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zls/opusTCv20210807+bt_transformer-big_2022-03-23.zip)
* more information about released models: [OPUS-MT zle-zls README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zls/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>bul<<`.
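If you are unsure which `>>id<<` labels a given checkpoint accepts, one way to check is to scan the tokenizer vocabulary for them. This is a sketch that assumes the labels are stored as ordinary vocabulary entries, which is the case for these Marian checkpoints:
```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-zle-zls")

# target-language labels look like >>bul<<, >>hrv<<, ... in the shared vocabulary
lang_tokens = sorted(t for t in tokenizer.get_vocab() if t.startswith(">>") and t.endswith("<<"))
print(lang_tokens)
# expected to include the labels listed above, e.g. >>bul<<, >>hrv<<, >>slv<<, >>srp_Cyrl<<, >>srp_Latn<<
```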
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>bul<< Новы каранавірус вельмі заразны.",
">>srp_Latn<< Моє ім'я — Саллі."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-zls"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Короната е силно заразна.
# Zovem se Sali.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-zls")
print(pipe(">>bul<< Новы каранавірус вельмі заразны."))
# expected output: Короната е силно заразна.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zls/opusTCv20210807+bt_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zls/opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| rus-bul | tatoeba-test-v2021-08-07 | 0.71515 | 53.7 | 1247 | 8272 |
| rus-hbs | tatoeba-test-v2021-08-07 | 0.69192 | 49.4 | 2500 | 14736 |
| rus-slv | tatoeba-test-v2021-08-07 | 0.38051 | 21.5 | 657 | 3969 |
| rus-srp_Cyrl | tatoeba-test-v2021-08-07 | 0.66622 | 46.1 | 881 | 5407 |
| rus-srp_Latn | tatoeba-test-v2021-08-07 | 0.70990 | 51.7 | 1483 | 8552 |
| ukr-bul | tatoeba-test-v2021-08-07 | 0.77283 | 61.3 | 1020 | 5181 |
| ukr-hbs | tatoeba-test-v2021-08-07 | 0.69401 | 52.1 | 942 | 5130 |
| ukr-hrv | tatoeba-test-v2021-08-07 | 0.67202 | 50.1 | 389 | 2302 |
| ukr-srp_Cyrl | tatoeba-test-v2021-08-07 | 0.70064 | 54.7 | 205 | 1112 |
| ukr-srp_Latn | tatoeba-test-v2021-08-07 | 0.72405 | 53.4 | 348 | 1716 |
| bel-bul | flores101-devtest | 0.49528 | 16.1 | 1012 | 24700 |
| bel-hrv | flores101-devtest | 0.46308 | 12.4 | 1012 | 22423 |
| bel-mkd | flores101-devtest | 0.48608 | 13.5 | 1012 | 24314 |
| bel-slv | flores101-devtest | 0.44452 | 12.2 | 1012 | 23425 |
| bel-srp_Cyrl | flores101-devtest | 0.44424 | 12.6 | 1012 | 23456 |
| rus-bul | flores101-devtest | 0.58653 | 28.9 | 1012 | 24700 |
| rus-hrv | flores101-devtest | 0.53494 | 23.2 | 1012 | 22423 |
| rus-mkd | flores101-devtest | 0.55184 | 24.3 | 1012 | 24314 |
| rus-slv | flores101-devtest | 0.52201 | 23.1 | 1012 | 23425 |
| rus-srp_Cyrl | flores101-devtest | 0.53038 | 24.1 | 1012 | 23456 |
| ukr-bul | flores101-devtest | 0.59625 | 30.8 | 1012 | 24700 |
| ukr-hrv | flores101-devtest | 0.54530 | 24.6 | 1012 | 22423 |
| ukr-mkd | flores101-devtest | 0.56822 | 26.2 | 1012 | 24314 |
| ukr-slv | flores101-devtest | 0.53092 | 24.2 | 1012 | 23425 |
| ukr-srp_Cyrl | flores101-devtest | 0.54618 | 26.2 | 1012 | 23456 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 00:46:26 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zle-zlw | ddb3434ec84147e0b03c663e9de130f589107914 | 2022-06-01T13:08:06.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"cs",
"pl",
"ru",
"uk",
"zle",
"zlw",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-zlw | 3 | null | transformers | 22,074 | ---
language:
- be
- cs
- pl
- ru
- uk
- zle
- zlw
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-zlw
results:
- task:
name: Translation rus-ces
type: translation
args: rus-ces
dataset:
name: flores101-devtest
type: flores_101
args: rus ces devtest
metrics:
- name: BLEU
type: bleu
value: 23.1
- task:
name: Translation ukr-ces
type: translation
args: ukr-ces
dataset:
name: flores101-devtest
type: flores_101
args: ukr ces devtest
metrics:
- name: BLEU
type: bleu
value: 25.1
- task:
name: Translation bel-pol
type: translation
args: bel-pol
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bel-pol
metrics:
- name: BLEU
type: bleu
value: 47.1
- task:
name: Translation rus-ces
type: translation
args: rus-ces
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-ces
metrics:
- name: BLEU
type: bleu
value: 53.4
- task:
name: Translation rus-pol
type: translation
args: rus-pol
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-pol
metrics:
- name: BLEU
type: bleu
value: 53.7
- task:
name: Translation ukr-ces
type: translation
args: ukr-ces
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-ces
metrics:
- name: BLEU
type: bleu
value: 58.0
- task:
name: Translation ukr-pol
type: translation
args: ukr-pol
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-pol
metrics:
- name: BLEU
type: bleu
value: 57.0
- task:
name: Translation rus-ces
type: translation
args: rus-ces
dataset:
name: newstest2013
type: wmt-2013-news
args: rus-ces
metrics:
- name: BLEU
type: bleu
value: 26.0
---
# opus-mt-tc-big-zle-zlw
Neural machine translation model for translating from East Slavic languages (zle) to West Slavic languages (zlw).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): bel rus ukr
* target language(s): ces pol
* valid target language labels: >>ces<< >>pol<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zlw/opusTCv20210807+bt_transformer-big_2022-03-23.zip)
* more information about released models: [OPUS-MT zle-zlw README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zlw/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>ces<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>pol<< Это метафора.",
">>pol<< Что вы делали?"
]
model_name = "pytorch-models/opus-mt-tc-big-zle-zlw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# To metafora.
# Co robiliście?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-zlw")
print(pipe(">>pol<< Это метафора."))
# expected output: To metafora.
```
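The `generate` call above relies on the checkpoint's default decoding settings. As with any Marian model, standard generation arguments can be passed explicitly; the values below are illustrative and are not the settings used to produce the benchmark scores in the next section:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-zle-zlw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>pol<< Это метафора."], return_tensors="pt", padding=True)
translated = model.generate(
    **batch,
    num_beams=4,          # beam width (the checkpoint config may already define a default)
    max_new_tokens=128,   # hard cap on the generated length
    early_stopping=True,  # stop once all beams are finished
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```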
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zlw/opusTCv20210807+bt_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zlw/opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bel-pol | tatoeba-test-v2021-08-07 | 0.65517 | 47.1 | 287 | 1706 |
| rus-ces | tatoeba-test-v2021-08-07 | 0.69695 | 53.4 | 2934 | 16831 |
| rus-pol | tatoeba-test-v2021-08-07 | 0.72176 | 53.7 | 3543 | 21505 |
| ukr-ces | tatoeba-test-v2021-08-07 | 0.73149 | 58.0 | 1787 | 8550 |
| ukr-pol | tatoeba-test-v2021-08-07 | 0.74649 | 57.0 | 2519 | 13201 |
| bel-ces | flores101-devtest | 0.41248 | 11.1 | 1012 | 22101 |
| bel-pol | flores101-devtest | 0.42240 | 10.2 | 1012 | 22520 |
| rus-ces | flores101-devtest | 0.50971 | 23.1 | 1012 | 22101 |
| rus-pol | flores101-devtest | 0.48672 | 18.4 | 1012 | 22520 |
| ukr-ces | flores101-devtest | 0.52482 | 25.1 | 1012 | 22101 |
| ukr-pol | flores101-devtest | 0.48790 | 18.8 | 1012 | 22520 |
| rus-ces | newstest2012 | 0.45834 | 18.8 | 3003 | 65456 |
| rus-ces | newstest2013 | 0.52364 | 26.0 | 3000 | 57250 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 00:50:29 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-gmq-zle | ccd7a6cf94e0f8bb143ec00baf448f36f2847e93 | 2022-06-01T13:08:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tc",
"big",
"gmq",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-gmq-zle | 3 | null | transformers | 22,075 | ---
language:
- da
- gmq
- is
- nb
- "no"
- ru
- sv
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-gmq-zle
results:
- task:
name: Translation dan-rus
type: translation
args: dan-rus
dataset:
name: flores101-devtest
type: flores_101
args: dan rus devtest
metrics:
- name: BLEU
type: bleu
value: 25.6
- task:
name: Translation dan-ukr
type: translation
args: dan-ukr
dataset:
name: flores101-devtest
type: flores_101
args: dan ukr devtest
metrics:
- name: BLEU
type: bleu
value: 25.5
- task:
name: Translation nob-rus
type: translation
args: nob-rus
dataset:
name: flores101-devtest
type: flores_101
args: nob rus devtest
metrics:
- name: BLEU
type: bleu
value: 22.1
- task:
name: Translation nob-ukr
type: translation
args: nob-ukr
dataset:
name: flores101-devtest
type: flores_101
args: nob ukr devtest
metrics:
- name: BLEU
type: bleu
value: 21.6
- task:
name: Translation swe-rus
type: translation
args: swe-rus
dataset:
name: flores101-devtest
type: flores_101
args: swe rus devtest
metrics:
- name: BLEU
type: bleu
value: 25.8
- task:
name: Translation swe-ukr
type: translation
args: swe-ukr
dataset:
name: flores101-devtest
type: flores_101
args: swe ukr devtest
metrics:
- name: BLEU
type: bleu
value: 25.7
- task:
name: Translation dan-rus
type: translation
args: dan-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: dan-rus
metrics:
- name: BLEU
type: bleu
value: 53.9
- task:
name: Translation nob-rus
type: translation
args: nob-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nob-rus
metrics:
- name: BLEU
type: bleu
value: 45.8
- task:
name: Translation swe-rus
type: translation
args: swe-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: swe-rus
metrics:
- name: BLEU
type: bleu
value: 45.9
---
# opus-mt-tc-big-gmq-zle
Neural machine translation model for translating from North Germanic languages (gmq) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): dan isl nob nor swe
* target language(s): rus ukr
* valid target language labels: >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-zle/opusTCv20210807+pbt_transformer-big_2022-03-23.zip)
* more information about released models: [OPUS-MT gmq-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>rus<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>bel<< Det er allerede torsdag i morgen.",
">>ukr<< Tom lekte katt och råtta med Mary."
]
model_name = "pytorch-models/opus-mt-tc-big-gmq-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Гэта ўжо чацвер заўтра.
# Том грав кішку і щура з Марією.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-gmq-zle")
print(pipe(">>bel<< Det er allerede torsdag i morgen."))
# expected output: Гэта ўжо чацвер заўтра.
```
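For anything larger than the two-sentence example above, it is usually easier to translate in mini-batches. The helper below is an illustrative sketch (the batch size is arbitrary and the function is not part of the released tooling); it prepends the target-language token and walks through a list of source sentences:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-gmq-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(sentences, target="rus", batch_size=16):
    """Translate a list of source sentences into the given target language."""
    tagged = [f">>{target}<< {s}" for s in sentences]
    translations = []
    for i in range(0, len(tagged), batch_size):
        batch = tokenizer(tagged[i:i + batch_size], return_tensors="pt", padding=True)
        generated = model.generate(**batch)
        translations.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))
    return translations

print(translate(["Det er allerede torsdag i morgen."], target="ukr"))
```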
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-zle/opusTCv20210807+pbt_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-zle/opusTCv20210807+pbt_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| dan-rus | tatoeba-test-v2021-08-07 | 0.72627 | 53.9 | 1713 | 10480 |
| nob-rus | tatoeba-test-v2021-08-07 | 0.66881 | 45.8 | 1277 | 10659 |
| swe-rus | tatoeba-test-v2021-08-07 | 0.66248 | 45.9 | 1282 | 7659 |
| dan-rus | flores101-devtest | 0.53271 | 25.6 | 1012 | 23295 |
| dan-ukr | flores101-devtest | 0.54273 | 25.5 | 1012 | 22810 |
| nob-rus | flores101-devtest | 0.50426 | 22.1 | 1012 | 23295 |
| nob-ukr | flores101-devtest | 0.51156 | 21.6 | 1012 | 22810 |
| swe-rus | flores101-devtest | 0.53226 | 25.8 | 1012 | 23295 |
| swe-ukr | flores101-devtest | 0.54257 | 25.7 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 02:08:53 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-pt-zle | c6cc16bcc2ef18fbe47c4b9cabcba106307af0bf | 2022-06-01T13:04:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pt",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-pt-zle | 3 | null | transformers | 22,076 | ---
language:
- pt
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-pt-zle
results:
- task:
name: Translation por-rus
type: translation
args: por-rus
dataset:
name: flores101-devtest
type: flores_101
args: por rus devtest
metrics:
- name: BLEU
type: bleu
value: 26.8
- task:
name: Translation por-ukr
type: translation
args: por-ukr
dataset:
name: flores101-devtest
type: flores_101
args: por ukr devtest
metrics:
- name: BLEU
type: bleu
value: 25.1
- task:
name: Translation por-rus
type: translation
args: por-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: por-rus
metrics:
- name: BLEU
type: bleu
value: 47.6
- task:
name: Translation por-ukr
type: translation
args: por-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: por-ukr
metrics:
- name: BLEU
type: bleu
value: 44.7
---
# opus-mt-tc-big-pt-zle
Neural machine translation model for translating from Portuguese (pt) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): por
* target language(s): rus ukr
* valid target language labels: >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-zle/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information about released models: [OPUS-MT por-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>rus<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ukr<< Esse é o meu lugar.",
">>rus<< Tom tem problemas de saúde."
]
model_name = "pytorch-models/opus-mt-tc-big-pt-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Це моє місце.
# У Тома проблемы со здоровьем.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-pt-zle")
print(pipe(">>ukr<< Esse é o meu lugar."))
# expected output: Це моє місце.
```
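If a GPU is available, the same pipeline can be placed on it and fed batches of sentences. The `device` index and `batch_size` below are illustrative and assume a single CUDA device:
```python
from transformers import pipeline

pipe = pipeline(
    "translation",
    model="Helsinki-NLP/opus-mt-tc-big-pt-zle",
    device=0,        # first CUDA GPU; use -1 (the default) for CPU
    batch_size=16,   # number of sentences per forward pass
)
src = [">>ukr<< Esse é o meu lugar.", ">>rus<< Tom tem problemas de saúde."]
print(pipe(src))
```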
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| por-rus | tatoeba-test-v2021-08-07 | 0.67980 | 47.6 | 10000 | 65326 |
| por-ukr | tatoeba-test-v2021-08-07 | 0.65867 | 44.7 | 3372 | 18933 |
| por-rus | flores101-devtest | 0.54675 | 26.8 | 1012 | 23295 |
| por-ukr | flores101-devtest | 0.53690 | 25.1 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 03:20:20 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-es-zle | 37bf43a8b98a85d2693a80926f0092e8a3b0698d | 2022-06-01T13:04:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"es",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-es-zle | 3 | null | transformers | 22,077 | ---
language:
- be
- es
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-es-zle
results:
- task:
name: Translation spa-rus
type: translation
args: spa-rus
dataset:
name: flores101-devtest
type: flores_101
args: spa rus devtest
metrics:
- name: BLEU
type: bleu
value: 20.2
- task:
name: Translation spa-bel
type: translation
args: spa-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: spa-bel
metrics:
- name: BLEU
type: bleu
value: 27.5
- task:
name: Translation spa-rus
type: translation
args: spa-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: spa-rus
metrics:
- name: BLEU
type: bleu
value: 49.0
- task:
name: Translation spa-ukr
type: translation
args: spa-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: spa-ukr
metrics:
- name: BLEU
type: bleu
value: 42.3
- task:
name: Translation spa-rus
type: translation
args: spa-rus
dataset:
name: newstest2012
type: wmt-2012-news
args: spa-rus
metrics:
- name: BLEU
type: bleu
value: 24.6
- task:
name: Translation spa-rus
type: translation
args: spa-rus
dataset:
name: newstest2013
type: wmt-2013-news
args: spa-rus
metrics:
- name: BLEU
type: bleu
value: 26.9
---
# opus-mt-tc-big-es-zle
Neural machine translation model for translating from Spanish (es) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): spa
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information about released models: [OPUS-MT spa-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>bel<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>rus<< Su novela se vendió bien.",
">>ukr<< Quiero ir a Corea del Norte."
]
model_name = "pytorch-models/opus-mt-tc-big-es-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Его роман хорошо продавался.
# Я хочу поїхати до Північної Кореї.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-es-zle")
print(pipe(">>rus<< Su novela se vendió bien."))
# expected output: Его роман хорошо продавался.
```
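Because the target language is chosen by the `>>id<<` prefix, the same Spanish sentence can be sent to all three targets in a single batch. A small sketch, reusing the sentence from the example above (only the Russian output is confirmed by the expected output shown there):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-es-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentence = "Su novela se vendió bien."
targets = ("bel", "rus", "ukr")
batch = tokenizer([f">>{t}<< {sentence}" for t in targets], return_tensors="pt", padding=True)

outputs = model.generate(**batch)
for target, line in zip(targets, tokenizer.batch_decode(outputs, skip_special_tokens=True)):
    print(target, line)
# the rus line should match the expected output above: Его роман хорошо продавался.
```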
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| spa-bel | tatoeba-test-v2021-08-07 | 0.54506 | 27.5 | 205 | 1259 |
| spa-rus | tatoeba-test-v2021-08-07 | 0.68523 | 49.0 | 10506 | 69242 |
| spa-ukr | tatoeba-test-v2021-08-07 | 0.63502 | 42.3 | 10115 | 54544 |
| spa-rus | flores101-devtest | 0.49913 | 20.2 | 1012 | 23295 |
| spa-ukr | flores101-devtest | 0.47772 | 17.4 | 1012 | 22810 |
| spa-rus | newstest2012 | 0.52436 | 24.6 | 3003 | 64790 |
| spa-rus | newstest2013 | 0.54249 | 26.9 | 3000 | 58560 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 03:35:13 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zlw-zle | 308095a5a21cae82c8b1acd65410451c88f80af2 | 2022-06-01T13:02:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"cs",
"dsb",
"hsb",
"pl",
"ru",
"uk",
"zle",
"zlw",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zlw-zle | 3 | null | transformers | 22,078 | ---
language:
- be
- cs
- dsb
- hsb
- pl
- ru
- uk
- zle
- zlw
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zlw-zle
results:
- task:
name: Translation ces-rus
type: translation
args: ces-rus
dataset:
name: flores101-devtest
type: flores_101
args: ces rus devtest
metrics:
- name: BLEU
type: bleu
value: 24.2
- task:
name: Translation ces-ukr
type: translation
args: ces-ukr
dataset:
name: flores101-devtest
type: flores_101
args: ces ukr devtest
metrics:
- name: BLEU
type: bleu
value: 22.9
- task:
name: Translation pol-rus
type: translation
args: pol-rus
dataset:
name: flores101-devtest
type: flores_101
args: pol rus devtest
metrics:
- name: BLEU
type: bleu
value: 20.1
- task:
name: Translation ces-rus
type: translation
args: ces-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ces-rus
metrics:
- name: BLEU
type: bleu
value: 56.4
- task:
name: Translation ces-ukr
type: translation
args: ces-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ces-ukr
metrics:
- name: BLEU
type: bleu
value: 53.0
- task:
name: Translation pol-bel
type: translation
args: pol-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: pol-bel
metrics:
- name: BLEU
type: bleu
value: 29.4
- task:
name: Translation pol-rus
type: translation
args: pol-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: pol-rus
metrics:
- name: BLEU
type: bleu
value: 55.3
- task:
name: Translation pol-ukr
type: translation
args: pol-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: pol-ukr
metrics:
- name: BLEU
type: bleu
value: 48.6
- task:
name: Translation ces-rus
type: translation
args: ces-rus
dataset:
name: newstest2012
type: wmt-2012-news
args: ces-rus
metrics:
- name: BLEU
type: bleu
value: 21.0
- task:
name: Translation ces-rus
type: translation
args: ces-rus
dataset:
name: newstest2013
type: wmt-2013-news
args: ces-rus
metrics:
- name: BLEU
type: bleu
value: 27.2
---
# opus-mt-tc-big-zlw-zle
Neural machine translation model for translating from West Slavic languages (zlw) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-19
* source language(s): ces dsb hsb pol
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zle/opusTCv20210807+bt_transformer-big_2022-03-19.zip)
* more information about released models: [OPUS-MT zlw-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>bel<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>rus<< Je vystudovaný právník.",
">>rus<< Gdzie jest moja książka ?"
]
model_name = "pytorch-models/opus-mt-tc-big-zlw-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Он дипломированный юрист.
# Где моя книга?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zlw-zle")
print(pipe(">>rus<< Je vystudovaný právník."))
# expected output: Он дипломированный юрист.
```
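The tokenizer will happily encode very long paragraphs, but the training data is largely sentence-level, so quality tends to degrade on long inputs. If a hard cap is needed, truncation can be requested explicitly; the length limit below is illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-zlw-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

long_text = ">>rus<< " + " ".join(["Gdzie jest moja książka?"] * 200)
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```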
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zle/opusTCv20210807+bt_transformer-big_2022-03-19.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zle/opusTCv20210807+bt_transformer-big_2022-03-19.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ces-rus | tatoeba-test-v2021-08-07 | 0.73154 | 56.4 | 2934 | 17790 |
| ces-ukr | tatoeba-test-v2021-08-07 | 0.69934 | 53.0 | 1787 | 8891 |
| pol-bel | tatoeba-test-v2021-08-07 | 0.51039 | 29.4 | 287 | 1730 |
| pol-rus | tatoeba-test-v2021-08-07 | 0.73156 | 55.3 | 3543 | 22067 |
| pol-ukr | tatoeba-test-v2021-08-07 | 0.68247 | 48.6 | 2519 | 13535 |
| ces-rus | flores101-devtest | 0.52316 | 24.2 | 1012 | 23295 |
| ces-ukr | flores101-devtest | 0.52261 | 22.9 | 1012 | 22810 |
| pol-rus | flores101-devtest | 0.49414 | 20.1 | 1012 | 23295 |
| pol-ukr | flores101-devtest | 0.48250 | 18.3 | 1012 | 22810 |
| ces-rus | newstest2012 | 0.49469 | 21.0 | 3003 | 64790 |
| ces-rus | newstest2013 | 0.54197 | 27.2 | 3000 | 58560 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 04:13:23 EET 2022
* port machine: LM0-400-22516.local
|
huggingtweets/melindagates | ac24a3e0836d808d23bc4505c0680d710892c406 | 2022-03-24T13:28:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/melindagates | 3 | null | transformers | 22,079 | ---
language: en
thumbnail: http://www.huggingtweets.com/melindagates/1648128524647/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1054713372845862912/1SR434Pr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Melinda French Gates</div>
<div style="text-align: center; font-size: 14px;">@melindagates</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Melinda French Gates.
| Data | Melinda French Gates |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 231 |
| Short tweets | 2 |
| Tweets kept | 3017 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39nn0ehw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @melindagates's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xcx4bfy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xcx4bfy/artifacts) is logged and versioned.
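As a rough sketch of what this kind of causal-LM fine-tuning looks like with the `transformers` Trainer (the actual huggingtweets pipeline has its own data collection, preprocessing and W&B logging; the file name and hyperparameters here are placeholders, not the settings of the run linked above):
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "tweets.txt" is a placeholder: one cleaned tweet per line
dataset = load_dataset("text", data_files={"train": "tweets.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # plain causal LM objective

args = TrainingArguments(
    output_dir="gpt2-tweets",        # placeholder output directory
    num_train_epochs=3,              # placeholder hyperparameters
    per_device_train_batch_size=8,
    learning_rate=5e-5,
)
Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```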
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/melindagates')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
socialmediaie/bertweet-base_wnut17_ner | 1e8a9b197911a2c8a36fd960dc312809d0c108bd | 2022-04-01T16:30:20.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:wnut_17",
"transformers",
"generated_from_trainer",
"named-entity-recognition",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | socialmediaie | null | socialmediaie/bertweet-base_wnut17_ner | 3 | null | transformers | 22,080 | ---
license: apache-2.0
tags:
- generated_from_trainer
- named-entity-recognition
- token-classification
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: fine_tune_bertweet-base-lp-ft
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
args: semval
metrics:
- name: Precision
type: precision
value: 0.6154830454254638
- name: Recall
type: recall
value: 0.49844559585492226
- name: F1
type: f1
value: 0.5508159175493844
- name: Accuracy
type: accuracy
value: 0.9499198834668608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bertweet-base finetuned on wnut17_ner
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the [wnut_17](https://huggingface.co/datasets/wnut_17) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3376
- Overall Precision: 0.6803
- Overall Recall: 0.6096
- Overall F1: 0.6430
- Overall Accuracy: 0.9509
- Corporation F1: 0.2975
- Creative-work F1: 0.4436
- Group F1: 0.3624
- Location F1: 0.6834
- Person F1: 0.7902
- Product F1: 0.3887
## Model description
More information needed
## Intended uses & limitations
More information needed
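A minimal inference sketch with the token-classification pipeline, assuming the checkpoint and its tokenizer are loaded from this repository (`socialmediaie/bertweet-base_wnut17_ner`); the example tweet is made up:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="socialmediaie/bertweet-base_wnut17_ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("heading to London with Tom to see the new Apple store"))
```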
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
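Expressed through the standard `TrainingArguments` API, those settings map roughly onto the sketch below; the output directory and evaluation strategy are placeholders rather than values recovered from the original run, and the Adam betas/epsilon are simply the library defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bertweet-base-wnut17-ner",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",            # placeholder: evaluate once per epoch
)
```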
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Corporation F1 | Creative-work F1 | Group F1 | Location F1 | Person F1 | Product F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:--------------:|:----------------:|:--------:|:-----------:|:---------:|:----------:|
| 0.0215 | 1.0 | 213 | 0.2913 | 0.7026 | 0.5905 | 0.6417 | 0.9507 | 0.2832 | 0.4444 | 0.2975 | 0.6854 | 0.7788 | 0.4015 |
| 0.0213 | 2.0 | 426 | 0.3052 | 0.6774 | 0.5772 | 0.6233 | 0.9495 | 0.2830 | 0.3483 | 0.3231 | 0.6857 | 0.7728 | 0.3794 |
| 0.0288 | 3.0 | 639 | 0.3378 | 0.7061 | 0.5507 | 0.6188 | 0.9467 | 0.3077 | 0.4184 | 0.3529 | 0.6222 | 0.7532 | 0.3910 |
| 0.0124 | 4.0 | 852 | 0.2712 | 0.6574 | 0.6121 | 0.6340 | 0.9502 | 0.3077 | 0.4842 | 0.3167 | 0.6809 | 0.7735 | 0.3986 |
| 0.0208 | 5.0 | 1065 | 0.2905 | 0.7108 | 0.6063 | 0.6544 | 0.9518 | 0.3063 | 0.4286 | 0.3419 | 0.7052 | 0.7913 | 0.4223 |
| 0.0071 | 6.0 | 1278 | 0.3189 | 0.6756 | 0.5847 | 0.6269 | 0.9494 | 0.2759 | 0.4380 | 0.3256 | 0.6744 | 0.7781 | 0.3779 |
| 0.0073 | 7.0 | 1491 | 0.3593 | 0.7330 | 0.5540 | 0.6310 | 0.9476 | 0.3061 | 0.4388 | 0.3784 | 0.6946 | 0.7631 | 0.3374 |
| 0.0135 | 8.0 | 1704 | 0.3564 | 0.6875 | 0.5482 | 0.6100 | 0.9471 | 0.34 | 0.4179 | 0.3088 | 0.6632 | 0.7486 | 0.3695 |
| 0.0097 | 9.0 | 1917 | 0.3085 | 0.6598 | 0.6395 | 0.6495 | 0.9516 | 0.3111 | 0.4609 | 0.3836 | 0.7090 | 0.7906 | 0.4083 |
| 0.0108 | 10.0 | 2130 | 0.3045 | 0.6605 | 0.6478 | 0.6541 | 0.9509 | 0.3529 | 0.4580 | 0.3649 | 0.6897 | 0.7843 | 0.4387 |
| 0.013 | 11.0 | 2343 | 0.3383 | 0.6788 | 0.6179 | 0.6470 | 0.9507 | 0.2783 | 0.4248 | 0.3358 | 0.7368 | 0.7958 | 0.3655 |
| 0.0076 | 12.0 | 2556 | 0.3617 | 0.6920 | 0.5523 | 0.6143 | 0.9474 | 0.2708 | 0.3985 | 0.3333 | 0.6740 | 0.7566 | 0.3525 |
| 0.0042 | 13.0 | 2769 | 0.3747 | 0.6896 | 0.5664 | 0.6220 | 0.9473 | 0.2478 | 0.3915 | 0.3521 | 0.6561 | 0.7742 | 0.3539 |
| 0.0049 | 14.0 | 2982 | 0.3376 | 0.6803 | 0.6096 | 0.6430 | 0.9509 | 0.2975 | 0.4436 | 0.3624 | 0.6834 | 0.7902 | 0.3887 |
### Overall results
| metric_type | train | validation | test |
|:-------------------|-----------:|-----------:|-----------:|
| loss | 0.012030 | 0.271155 | 0.273943 |
| runtime | 16.292400 | 5.068800 | 8.596800 |
| samples_per_second | 208.318000 | 199.060000 | 149.707000 |
| steps_per_second | 13.074000 | 12.626000 | 9.422000 |
| corporation_f1 | 0.936877 | 0.307692 | 0.368627 |
| person_f1 | 0.984252 | 0.773455 | 0.689826 |
| product_f1 | 0.893246 | 0.398625 | 0.270423 |
| creative-work_f1 | 0.880562 | 0.484211 | 0.415274 |
| group_f1 | 0.975547 | 0.316667 | 0.411348 |
| location_f1 | 0.978887 | 0.680851 | 0.638695 |
| overall_accuracy | 0.997709 | 0.950244 | 0.949920 |
| overall_f1 | 0.961113 | 0.633978 | 0.550816 |
| overall_precision | 0.956337 | 0.657449 | 0.615483 |
| overall_recall | 0.965938 | 0.612126 | 0.498446 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-10 | 1b078217e13aec72dd93a1d0a2d69f1c69ad1d99 | 2022-03-25T07:13:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-10 | 3 | null | transformers | 22,081 | Entry not found |
eliasws/openApiT5-distilled-description-v3 | 7d9b64aaa539e7a1e30c247941a1e0674010d53f | 2022-03-25T09:30:37.000Z | [
"pytorch",
"t5",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | eliasws | null | eliasws/openApiT5-distilled-description-v3 | 3 | null | sentence-transformers | 22,082 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# eliasws/openApiT5-distilled-description-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('eliasws/openApiT5-distilled-description-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('eliasws/openApiT5-distilled-description-v3')
model = AutoModel.from_pretrained('eliasws/openApiT5-distilled-description-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5547 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1109,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
eliasws/openApiT5-to-json-v3 | d1cf4b2968aed2bb60499409f26378b719b9fece | 2022-03-25T10:33:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eliasws | null | eliasws/openApiT5-to-json-v3 | 3 | null | transformers | 22,083 | Entry not found |
mimicheng/codeparrot-ds-sample-2ep | c634224178285444b1f1985b042a241c39990cef | 2022-03-26T12:51:09.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | mimicheng | null | mimicheng/codeparrot-ds-sample-2ep | 3 | null | transformers | 22,084 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-2ep
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3782
## Model description
More information needed
## Intended uses & limitations
More information needed
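As a rough usage sketch (the prompt and generation settings below are illustrative assumptions), the model can be used through the standard text-generation pipeline:
```python
from transformers import pipeline

# Code generation with the GPT-2 model fine-tuned on the codeparrot sample
generator = pipeline("text-generation", model="mimicheng/codeparrot-ds-sample-2ep")

prompt = "def mean(numbers):"
outputs = generator(prompt, max_new_tokens=64, num_return_sequences=1)
print(outputs[0]["generated_text"])
```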
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2562 | 1.86 | 5000 | 1.3782 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
yy642/bert-base-uncased-finetuned-mnli-512-5 | e3e8f6197a40ae78eef90e293a6b3f8b4450f7b6 | 2022-03-26T09:17:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-512-5 | 3 | null | transformers | 22,085 | Entry not found |
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augment_0.1 | c39376acbfd8be0d86c44b12a0f67597fb604d55 | 2022-03-27T17:07:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_data_augment_0.1 | 3 | null | transformers | 22,086 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4658
- Wer: 0.5037
## Model description
More information needed
## Intended uses & limitations
More information needed
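A minimal speech-to-text sketch (assuming a 16 kHz audio file; the path is a placeholder):
```python
from transformers import pipeline

# Automatic speech recognition with the fine-tuned XLSR-53 model
asr = pipeline(
    "automatic-speech-recognition",
    model="scasutt/wav2vec2-large-xlsr-53_toy_train_data_augment_0.1",
)

# Longer recordings can be transcribed in chunks
print(asr("sample.wav", chunk_length_s=30))
```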
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.447 | 1.05 | 250 | 3.3799 | 1.0 |
| 3.089 | 2.1 | 500 | 3.4868 | 1.0 |
| 3.063 | 3.15 | 750 | 3.3155 | 1.0 |
| 2.4008 | 4.2 | 1000 | 1.2934 | 0.8919 |
| 1.618 | 5.25 | 1250 | 0.7847 | 0.7338 |
| 1.3038 | 6.3 | 1500 | 0.6459 | 0.6712 |
| 1.2074 | 7.35 | 1750 | 0.5705 | 0.6269 |
| 1.1062 | 8.4 | 2000 | 0.5267 | 0.5843 |
| 1.026 | 9.45 | 2250 | 0.5108 | 0.5683 |
| 0.9505 | 10.5 | 2500 | 0.5066 | 0.5568 |
| 0.893 | 11.55 | 2750 | 0.5161 | 0.5532 |
| 0.8535 | 12.6 | 3000 | 0.4994 | 0.5341 |
| 0.8462 | 13.65 | 3250 | 0.4626 | 0.5262 |
| 0.8334 | 14.7 | 3500 | 0.4593 | 0.5197 |
| 0.842 | 15.75 | 3750 | 0.4651 | 0.5126 |
| 0.7678 | 16.81 | 4000 | 0.4687 | 0.5120 |
| 0.7873 | 17.86 | 4250 | 0.4716 | 0.5070 |
| 0.7486 | 18.91 | 4500 | 0.4657 | 0.5033 |
| 0.7073 | 19.96 | 4750 | 0.4658 | 0.5037 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
KamrusSamad/tiny2 | 703424eafd43916f718b12cca869d565b686ddd3 | 2022-03-25T20:03:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KamrusSamad | null | KamrusSamad/tiny2 | 3 | null | transformers | 22,087 | Entry not found |
yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-10-v2 | 5d992be3cdd2ee4c6a544e6dcf44af1607759ece | 2022-03-26T22:08:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-10-v2 | 3 | null | transformers | 22,088 | Entry not found |
l3cube-pune/hing-roberta-mixed | edcf1253eb98eae8b0774a281443ba9d0ee06290 | 2022-06-26T15:12:30.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"hi",
"en",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"transformers",
"codemix",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | l3cube-pune | null | l3cube-pune/hing-roberta-mixed | 3 | null | transformers | 22,089 | ---
license: cc-by-4.0
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingRoBERTa-Mixed
HingRoBERTa-Mixed is a Hindi-English code-mixed BERT model trained on roman + devanagari text. It is an XLM-RoBERTa model fine-tuned on the mixed-script L3Cube-HingCorpus.
<br>
[Dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
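A minimal fill-mask usage sketch (the code-mixed example sentence is illustrative):
```python
from transformers import pipeline

# XLM-RoBERTa based models use <mask> as the mask token
fill_mask = pipeline("fill-mask", model="l3cube-pune/hing-roberta-mixed")

# Roman-script Hindi-English code-mixed example
print(fill_mask("yeh movie bahut <mask> thi"))
```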
```
@InProceedings{nayak-joshi:2022:WILDRE6,
author = {Nayak, Ravindra and Joshi, Raviraj},
title = {L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7--12}
}
``` |
scasutt/wav2vec2-base_toy_train_data_augmented | ffd44ac28a9c9938b2514a123856dd44a400548b | 2022-03-26T10:09:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-base_toy_train_data_augmented | 3 | null | transformers | 22,090 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_augmented
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0238
- Wer: 0.6969
## Model description
More information needed
## Intended uses & limitations
More information needed
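A minimal transcription sketch without the pipeline wrapper (the audio path is a placeholder and 16 kHz mono audio is assumed):
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "scasutt/wav2vec2-base_toy_train_data_augmented"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load and resample an audio file to 16 kHz
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```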
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.12 | 1.05 | 250 | 3.3998 | 0.9982 |
| 3.0727 | 2.1 | 500 | 3.1261 | 0.9982 |
| 1.9729 | 3.15 | 750 | 1.4868 | 0.9464 |
| 1.3213 | 4.2 | 1000 | 1.2598 | 0.8833 |
| 1.0508 | 5.25 | 1250 | 1.0014 | 0.8102 |
| 0.8483 | 6.3 | 1500 | 0.9475 | 0.7944 |
| 0.7192 | 7.35 | 1750 | 0.9493 | 0.7686 |
| 0.6447 | 8.4 | 2000 | 0.9872 | 0.7573 |
| 0.6064 | 9.45 | 2250 | 0.9587 | 0.7447 |
| 0.5384 | 10.5 | 2500 | 0.9332 | 0.7320 |
| 0.4985 | 11.55 | 2750 | 0.9926 | 0.7315 |
| 0.4643 | 12.6 | 3000 | 1.0008 | 0.7292 |
| 0.4565 | 13.65 | 3250 | 0.9522 | 0.7171 |
| 0.449 | 14.7 | 3500 | 0.9685 | 0.7140 |
| 0.4307 | 15.75 | 3750 | 1.0080 | 0.7077 |
| 0.4239 | 16.81 | 4000 | 0.9950 | 0.7023 |
| 0.389 | 17.86 | 4250 | 1.0260 | 0.7007 |
| 0.3471 | 18.91 | 4500 | 1.0012 | 0.6966 |
| 0.3276 | 19.96 | 4750 | 1.0238 | 0.6969 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
qc7/shad_ml2_transformer | 068eb5c84043aff2c08a51d5401b57d9360d21db | 2022-03-27T09:52:48.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:unlicense"
] | text-classification | false | qc7 | null | qc7/shad_ml2_transformer | 3 | null | transformers | 22,091 | ---
license: unlicense
---
|
eliasws/openApiT5-labeled-v2 | a570b2c1b99ec65bc16b7c0896c28b23702a271b | 2022-03-26T15:41:46.000Z | [
"pytorch",
"t5",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | eliasws | null | eliasws/openApiT5-labeled-v2 | 3 | null | sentence-transformers | 22,092 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# eliasws/openApiT5-labeled-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('eliasws/openApiT5-labeled-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('eliasws/openApiT5-labeled-v2')
model = AutoModel.from_pretrained('eliasws/openApiT5-labeled-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 20250 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 8100,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 8100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Jiexing/sparc_relation_t5_3b-2112 | 5d60d47872fc9d5bb40161db8b780dfe6b895705 | 2022-03-27T14:09:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jiexing | null | Jiexing/sparc_relation_t5_3b-2112 | 3 | null | transformers | 22,093 | Entry not found |
sebastian-hofstaetter/uni-colberter-128-1-msmarco | 6d34d2501d18c2d23d53de928ff9016d2d63bc74 | 2022-03-27T15:21:06.000Z | [
"pytorch",
"ColBERT",
"en",
"dataset:ms_marco",
"arxiv:2203.13088",
"transformers",
"bag-of-words",
"dense-passage-retrieval",
"knowledge-distillation",
"license:apache-2.0"
] | null | false | sebastian-hofstaetter | null | sebastian-hofstaetter/uni-colberter-128-1-msmarco | 3 | null | transformers | 22,094 | ---
license: apache-2.0
language: "en"
tags:
- bag-of-words
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# Uni-ColBERTer (Dim: 1) for Passage Retrieval
If you want to know more about our (Uni-)ColBERTer architecture, check out our paper: https://arxiv.org/abs/2203.13088 🎉
For more information, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/colberter
## Limitations & Bias
- The model is only trained on English text.
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words in length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@article{Hofstaetter2022_colberter,
author = {Sebastian Hofst{\"a}tter and Omar Khattab and Sophia Althammer and Mete Sertkan and Allan Hanbury},
title = {Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction},
publisher = {arXiv},
url = {https://arxiv.org/abs/2203.13088},
doi = {10.48550/ARXIV.2203.13088},
year = {2022},
}
``` |
jkooup/title_model | 60b4a43b3f9bfd8c12b256dcc080e02774c3f98d | 2022-03-28T10:05:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jkooup | null | jkooup/title_model | 3 | null | transformers | 22,095 | Entry not found |
ludoviciarraga/bert-finetuned-ner | c2ae97751eee71f5fbde0df83f4226d4a810e943 | 2022-03-28T14:19:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ludoviciarraga | null | ludoviciarraga/bert-finetuned-ner | 3 | null | transformers | 22,096 | Entry not found |
Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 80013bda1298959a5746c61495b998e31b4e51d1 | 2022-05-26T12:53:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"transformers",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Finnish-NLP | null | Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 3 | null | transformers | 22,097 | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 5.65
- name: Test CER
type: cer
value: 1.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS ASR
type: google/fleurs
args: fi_fi
metrics:
- name: Test WER
type: wer
value: 20.34
- name: Test CER
type: cer
value: 6.97
---
# Wav2vec2-xls-r-1b for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [aapot/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm) model; it has simply been copied/moved to this `Finnish-NLP` Hugging Face organization.
**Note**: there is a better V2 version of this model which has been fine-tuned longer with 16 hours of more data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is fine-tuned version of the pretrained model (1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
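For a quick start outside the notebook, here is a sketch of LM-boosted decoding (it assumes `pyctcdecode` and `kenlm` are installed and that the input audio is 16 kHz; the file path is a placeholder):
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("finnish_sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# batch_decode runs beam-search decoding with the bundled KenLM language model
print(processor.batch_decode(logits.numpy()).text[0])
```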
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best on fairly short audio clips of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in these datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects. It may be beneficial to train your own KenLM language model for your domain language and use it in the decoding.
## Training data
This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
Training script was provided by Hugging Face and it is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.968 | 0.18 | 500 | 0.4870 | 0.4720 |
| 0.6557 | 0.36 | 1000 | 0.2450 | 0.2931 |
| 0.647 | 0.54 | 1500 | 0.1818 | 0.2255 |
| 0.5297 | 0.72 | 2000 | 0.1698 | 0.2354 |
| 0.5802 | 0.9 | 2500 | 0.1581 | 0.2355 |
| 0.6351 | 1.07 | 3000 | 0.1689 | 0.2336 |
| 0.4626 | 1.25 | 3500 | 0.1719 | 0.3099 |
| 0.4526 | 1.43 | 4000 | 0.1434 | 0.2069 |
| 0.4692 | 1.61 | 4500 | 0.1645 | 0.2192 |
| 0.4584 | 1.79 | 5000 | 0.1483 | 0.1987 |
| 0.4234 | 1.97 | 5500 | 0.1499 | 0.2178 |
| 0.4243 | 2.15 | 6000 | 0.1345 | 0.2070 |
| 0.4108 | 2.33 | 6500 | 0.1383 | 0.1850 |
| 0.4048 | 2.51 | 7000 | 0.1338 | 0.1811 |
| 0.4085 | 2.69 | 7500 | 0.1290 | 0.1780 |
| 0.4026 | 2.87 | 8000 | 0.1239 | 0.1650 |
| 0.4033 | 3.04 | 8500 | 0.1346 | 0.1657 |
| 0.3986 | 3.22 | 9000 | 0.1310 | 0.1850 |
| 0.3867 | 3.4 | 9500 | 0.1273 | 0.1741 |
| 0.3658 | 3.58 | 10000 | 0.1219 | 0.1672 |
| 0.382 | 3.76 | 10500 | 0.1306 | 0.1698 |
| 0.3847 | 3.94 | 11000 | 0.1230 | 0.1577 |
| 0.3691 | 4.12 | 11500 | 0.1310 | 0.1615 |
| 0.3593 | 4.3 | 12000 | 0.1296 | 0.1622 |
| 0.3619 | 4.48 | 12500 | 0.1285 | 0.1601 |
| 0.3361 | 4.66 | 13000 | 0.1261 | 0.1569 |
| 0.3603 | 4.84 | 13500 | 0.1235 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [Common Voice 9.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) and with the [FLEURS ASR Finnish test split](https://huggingface.co/datasets/google/fleurs).
This model's training data includes the training splits of Common Voice 7.0, while our newer `Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned` and `Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish` models include Common Voice 9.0, so we ran tests for both Common Voice versions. Note: Common Voice does not seem to keep the test split fixed between dataset versions, so it is possible that some training examples of Common Voice 9.0 appear in the test split of Common Voice 7.0 and vice versa. Thus, Common Voice test results are not fully comparable between models trained with different Common Voice versions, but the comparison should still be meaningful enough.
### Common Voice 7.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the fourth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** |
### Common Voice 9.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_9_0 --config fi --split test
```
This model (the fourth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** |
### FLEURS ASR testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm --dataset google/fleurs --config fi_fi --split test
```
This model (the fourth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v2 | cfb365f33e7393e0df36479cab8f819f7ba90529 | 2022-03-28T19:04:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v2 | 3 | null | transformers | 22,098 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-sentiment-mesd-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-sentiment-mesd-v2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7213
- Accuracy: 0.3923
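The card has no usage section; a minimal audio-classification sketch is given below (the file path is a placeholder and 16 kHz mono audio is assumed):
```python
from transformers import pipeline

# Speech emotion/sentiment classification with the fine-tuned model
classifier = pipeline(
    "audio-classification",
    model="DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v2",
)

print(classifier("speech_sample.wav"))
```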
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7961 | 0.1462 |
| 1.9685 | 1.86 | 6 | 1.7932 | 0.1692 |
| 1.9685 | 2.86 | 9 | 1.7891 | 0.2 |
| 2.1386 | 3.86 | 12 | 1.7820 | 0.2923 |
| 1.9492 | 4.86 | 15 | 1.7750 | 0.2923 |
| 1.9492 | 5.86 | 18 | 1.7684 | 0.2846 |
| 2.1143 | 6.86 | 21 | 1.7624 | 0.3231 |
| 2.1143 | 7.86 | 24 | 1.7561 | 0.3308 |
| 2.0945 | 8.86 | 27 | 1.7500 | 0.3462 |
| 1.9121 | 9.86 | 30 | 1.7443 | 0.3385 |
| 1.9121 | 10.86 | 33 | 1.7386 | 0.3231 |
| 2.0682 | 11.86 | 36 | 1.7328 | 0.3231 |
| 2.0682 | 12.86 | 39 | 1.7272 | 0.3769 |
| 2.0527 | 13.86 | 42 | 1.7213 | 0.3923 |
| 1.8705 | 14.86 | 45 | 1.7154 | 0.3846 |
| 1.8705 | 15.86 | 48 | 1.7112 | 0.3846 |
| 2.0263 | 16.86 | 51 | 1.7082 | 0.3769 |
| 2.0263 | 17.86 | 54 | 1.7044 | 0.3846 |
| 2.0136 | 18.86 | 57 | 1.7021 | 0.3846 |
| 1.8429 | 19.86 | 60 | 1.7013 | 0.3846 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gayanin/bart-med-term-conditional-masking-0 | 50924a4dde25b8d15a390781ca79156a8dad8ae1 | 2022-03-29T12:03:56.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/bart-med-term-conditional-masking-0 | 3 | null | transformers | 22,099 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-conditional-masking-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-conditional-masking-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5041
- Rouge2 Precision: 0.7497
- Rouge2 Recall: 0.5246
- Rouge2 Fmeasure: 0.5986
## Model description
More information needed
## Intended uses & limitations
More information needed
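A minimal text2text-generation sketch (the input sentence and the `<mask>` format are illustrative assumptions, since the exact conditional-masking input format used during fine-tuning is not documented here):
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="gayanin/bart-med-term-conditional-masking-0",
)

# Illustrative medical-text input; the real masking scheme may differ
print(generator("The patient was diagnosed with <mask> after the blood test."))
```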
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6381 | 1.0 | 13915 | 0.5595 | 0.734 | 0.5152 | 0.5873 |
| 0.5429 | 2.0 | 27830 | 0.5243 | 0.7441 | 0.5225 | 0.5956 |
| 0.5002 | 3.0 | 41745 | 0.5078 | 0.7482 | 0.5238 | 0.5976 |
| 0.4607 | 4.0 | 55660 | 0.5041 | 0.7497 | 0.5246 | 0.5986 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|