modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jordyvl/bert-base-cased_conll2003-sm-first-ner | cfd4b6228b5982c67c152e5d586f090eaaede11d | 2022-07-18T16:13:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | jordyvl | null | jordyvl/bert-base-cased_conll2003-sm-first-ner | 11 | null | transformers | 11,400 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased_conll2003-sm-first-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.944354983002326
- name: Recall
type: recall
value: 0.9470662120940248
- name: F1
type: f1
value: 0.9457086543630173
- name: Accuracy
type: accuracy
value: 0.9860775887443339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased_conll2003-sm-first-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0783
- Precision: 0.9444
- Recall: 0.9471
- F1: 0.9457
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
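Until these sections are filled in, a minimal usage sketch (not part of the original card) may help; it assumes the standard Hugging Face Transformers `pipeline` API for token classification, and the example sentence and `aggregation_strategy` setting are illustrative.
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned checkpoint as a CoNLL-2003-style NER tagger.
ner = pipeline(
    "token-classification",
    model="jordyvl/bert-base-cased_conll2003-sm-first-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Hugging Face is based in New York City."))
```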
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0912 | 1.0 | 7021 | 0.0962 | 0.9191 | 0.9106 | 0.9148 | 0.9789 |
| 0.0302 | 2.0 | 14042 | 0.0748 | 0.9406 | 0.9413 | 0.9409 | 0.9847 |
| 0.0221 | 3.0 | 21063 | 0.0783 | 0.9444 | 0.9471 | 0.9457 | 0.9861 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
krupper/autotrain-text-complexity-classification-1125541240 | fb0243af9277b63e5d9ea660eee6ba00137bc0d7 | 2022-07-13T17:28:48.000Z | [
"pytorch",
"electra",
"text-classification",
"de",
"dataset:krupper/autotrain-data-text-complexity-classification",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | krupper | null | krupper/autotrain-text-complexity-classification-1125541240 | 11 | null | transformers | 11,401 | |
KeLiu/QETRA_HTML | 9cd7af31dd76b89892ee2cf42c01f6913411fced | 2022-07-13T13:40:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | KeLiu | null | KeLiu/QETRA_HTML | 11 | null | transformers | 11,402 | Entry not found |
GeniusVoice/robbert-v2-dutch-pruned-L4-H4-distilled | 0770fb68d93f27c833c03f2c80a7ac411733326c | 2022-07-13T13:42:08.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GeniusVoice | null | GeniusVoice/robbert-v2-dutch-pruned-L4-H4-distilled | 11 | null | sentence-transformers | 11,403 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1212 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.MSEEvaluator.MSEEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
nloc2578/1 | 02df7b226c6104ed19b1fa6c8f12aa4bf37b704e | 2022-07-13T17:31:43.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | nloc2578 | null | nloc2578/1 | 11 | null | transformers | 11,404 | ---
tags:
- generated_from_trainer
model-index:
- name: '1'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0015
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.0972 | 0.18 | 1500 | 3.0367 |
| 3.0021 | 0.36 | 3000 | 2.8847 |
| 2.9804 | 0.54 | 4500 | 2.7978 |
| 2.8753 | 0.72 | 6000 | 2.7484 |
| 2.8126 | 0.9 | 7500 | 2.6892 |
| 2.2697 | 1.08 | 9000 | 2.6075 |
| 2.2272 | 1.26 | 10500 | 2.5708 |
| 2.1248 | 1.44 | 12000 | 2.5094 |
| 2.1451 | 1.62 | 13500 | 2.4680 |
| 2.0756 | 1.8 | 15000 | 2.4251 |
| 1.9438 | 1.98 | 16500 | 2.4044 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
gemasphi/laprador_pt | a7b93bacdea23b1a8c5cec4b1a061ef43a1a4c7f | 2022-07-13T15:37:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | gemasphi | null | gemasphi/laprador_pt | 11 | null | sentence-transformers | 11,405 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gemasphi/laprador_pt
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gemasphi/laprador_pt')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gemasphi/laprador_pt')
model = AutoModel.from_pretrained('gemasphi/laprador_pt')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gemasphi/laprador_pt)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ticoAg/distilbert-base-uncased-finetuned-emotion | 24101221af10794644915e39517ceeadf557b678 | 2022-07-13T17:18:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ticoAg | null | ticoAg/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,406 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261470780516246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2148
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
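As a placeholder, here is a hedged usage sketch assuming the standard Transformers text-classification `pipeline` API; the example sentence is illustrative and not from the original card.
```python
from transformers import pipeline

# Minimal sketch: predict the emotion label of a sentence with the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="ticoAg/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am thrilled that the experiment finally worked!"))
```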
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8297 | 1.0 | 250 | 0.3235 | 0.9015 | 0.8977 |
| 0.2504 | 2.0 | 500 | 0.2148 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.7.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-cola | 34acaf295fc6aae71d8c43ef1f7879e1e93a4fa1 | 2022-07-15T02:38:39.000Z | [
"pytorch",
"pixel",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-cola | 11 | null | transformers | 11,407 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-cola
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE COLA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-wnli | 1a6a8e4b765712a77e38f7ef662b17bed59fa6fc | 2022-07-15T03:09:09.000Z | [
"pytorch",
"pixel",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-wnli | 11 | null | transformers | 11,408 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-wnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-wnli
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE WNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
huggingtweets/juncassis | 5ffe6f94b9b8ec8cf45a67b9d945c5eb631383eb | 2022-07-15T22:44:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/juncassis | 11 | null | transformers | 11,409 | ---
language: en
thumbnail: http://www.huggingtweets.com/juncassis/1657925041359/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1416640600124788736/vuYWNhWv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">elevator ghost</div>
<div style="text-align: center; font-size: 14px;">@juncassis</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from elevator ghost.
| Data | elevator ghost |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 1316 |
| Short tweets | 78 |
| Tweets kept | 1835 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1y52lrnz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @juncassis's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/239vywxd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/239vywxd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/juncassis')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gazzehamine/wav2vec2-base-timit-demo-google-colab | 1fd16f025d001322b8b00c4b5aff9e1f18e5baaa | 2022-07-29T10:53:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | gazzehamine | null | gazzehamine/wav2vec2-base-timit-demo-google-colab | 11 | null | transformers | 11,410 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5707
- Wer: 0.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
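For orientation, a minimal sketch (not part of the original card) assuming the standard automatic-speech-recognition `pipeline` API; `sample.wav` is a placeholder path, and decoding audio files requires ffmpeg.
```python
from transformers import pipeline

# Minimal sketch: transcribe an audio file with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="gazzehamine/wav2vec2-base-timit-demo-google-colab",
)

# "sample.wav" is a placeholder path; use any 16 kHz speech recording.
print(asr("sample.wav"))
```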
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5072 | 1.0 | 500 | 1.8786 | 0.9741 |
| 0.8836 | 2.01 | 1000 | 0.5147 | 0.5317 |
| 0.4576 | 3.01 | 1500 | 0.4774 | 0.4591 |
| 0.3056 | 4.02 | 2000 | 0.4393 | 0.4343 |
| 0.2349 | 5.02 | 2500 | 0.4404 | 0.4022 |
| 0.1946 | 6.02 | 3000 | 0.4564 | 0.3991 |
| 0.1624 | 7.03 | 3500 | 0.4428 | 0.3947 |
| 0.1421 | 8.03 | 4000 | 0.4312 | 0.3878 |
| 0.131 | 9.04 | 4500 | 0.4345 | 0.3853 |
| 0.1115 | 10.04 | 5000 | 0.4318 | 0.3753 |
| 0.1024 | 11.04 | 5500 | 0.5053 | 0.3798 |
| 0.0895 | 12.05 | 6000 | 0.5044 | 0.3782 |
| 0.0856 | 13.05 | 6500 | 0.4893 | 0.3665 |
| 0.0755 | 14.06 | 7000 | 0.4868 | 0.3662 |
| 0.0724 | 15.06 | 7500 | 0.5084 | 0.3681 |
| 0.0635 | 16.06 | 8000 | 0.5367 | 0.3530 |
| 0.0603 | 17.07 | 8500 | 0.5255 | 0.3604 |
| 0.0609 | 18.07 | 9000 | 0.5407 | 0.3678 |
| 0.0486 | 19.08 | 9500 | 0.5312 | 0.3630 |
| 0.047 | 20.08 | 10000 | 0.5498 | 0.3518 |
| 0.0437 | 21.08 | 10500 | 0.5326 | 0.3571 |
| 0.0379 | 22.09 | 11000 | 0.5644 | 0.3608 |
| 0.035 | 23.09 | 11500 | 0.5956 | 0.3539 |
| 0.0333 | 24.1 | 12000 | 0.5967 | 0.3517 |
| 0.0289 | 25.1 | 12500 | 0.5274 | 0.3399 |
| 0.0268 | 26.1 | 13000 | 0.5609 | 0.3406 |
| 0.0256 | 27.11 | 13500 | 0.5451 | 0.3448 |
| 0.0249 | 28.11 | 14000 | 0.5804 | 0.3413 |
| 0.0236 | 29.12 | 14500 | 0.5707 | 0.3388 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
domenicrosati/opus-mt-en-es-scielo | 3bf6dcdd80c46a72d81553b35c3bb3b5f848779a | 2022-07-18T20:09:57.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:scielo",
"transformers",
"translation",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| translation | false | domenicrosati | null | domenicrosati/opus-mt-en-es-scielo | 11 | null | transformers | 11,411 | ---
tags:
- translation
- generated_from_trainer
datasets:
- scielo
metrics:
- bleu
model-index:
- name: opus-mt-en-es-scielo
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scielo
type: scielo
args: en-es
metrics:
- name: Bleu
type: bleu
value: 41.53733801247958
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-scielo
This model is a fine-tuned version of [domenicrosati/opus-mt-en-es-scielo](https://huggingface.co/domenicrosati/opus-mt-en-es-scielo) on the scielo dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2189
- Bleu: 41.5373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.0943 | 1.0 | 10001 | 1.2189 | 41.5373 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
VanessaSchenkel/mbart-large-50-finetuned-opus-en-pt-translation-finetuned-english-to-portuguese-handmade-dataset | 657ab26c649cd1f14add1627a7caf79f06781f90 | 2022-07-15T16:52:34.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | VanessaSchenkel | null | VanessaSchenkel/mbart-large-50-finetuned-opus-en-pt-translation-finetuned-english-to-portuguese-handmade-dataset | 11 | null | transformers | 11,412 | ---
tags:
- generated_from_trainer
model-index:
- name: mbart-large-50-finetuned-opus-en-pt-translation-finetuned-english-to-portuguese-handmade-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-opus-en-pt-translation-finetuned-english-to-portuguese-handmade-dataset
This model is a fine-tuned version of [Narrativa/mbart-large-50-finetuned-opus-en-pt-translation](https://huggingface.co/Narrativa/mbart-large-50-finetuned-opus-en-pt-translation) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 22 | 0.8052 | 64.2749 | 11.9231 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
abecode/t5-base-finetuned-emo20q | b41f14773a611e48a33abc249d2184538ea2b32c | 2022-07-15T18:04:07.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | abecode | null | abecode/t5-base-finetuned-emo20q | 11 | null | transformers | 11,413 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-emo20q
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-emo20q
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 280 | 2.0507 | 58.2896 | 0.0 | 58.1047 | 58.2444 | 2.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
brassjin/klue-roberta_kluenli | 2a31b8e4b4c162865d5837c7dbc7fc664fe0f78d | 2022-07-16T07:41:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | brassjin | null | brassjin/klue-roberta_kluenli | 11 | null | transformers | 11,414 | Entry not found |
AbhirupGhosh/opus-mt-finetuned-en-hi | e1663f9abbc85a87d3ce583bba15b6064eab1598 | 2022-07-16T18:14:27.000Z | [
"pytorch",
"tf",
"marian",
"text2text-generation",
"en",
"hi",
"dataset:HindiEnglishCorpora",
"arxiv:1706.03762",
"transformers",
"translation",
"Hindi",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | AbhirupGhosh | null | AbhirupGhosh/opus-mt-finetuned-en-hi | 11 | null | transformers | 11,415 | ---
license: apache-2.0
language:
- en
- hi
datasets:
- HindiEnglishCorpora
tags:
- translation
- Hindi
- generated_from_keras_callback
---
# opus-mt-finetuned-hi-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on [HindiEnglish Corpora](https://www.clarin.eu/resource-families/parallel-corpora)
## Model description
The model is a transformer model similar to the [Transformer](https://arxiv.org/abs/1706.03762?context=cs) architecture defined in *Attention Is All You Need* by Vaswani et al.
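A minimal usage sketch (not part of the original card) is shown below. It assumes the English-to-Hindi direction implied by the model id; since the card also points to the Hindi-to-English base checkpoint, the direction should be verified before relying on it.
```python
from transformers import pipeline

# Minimal sketch: translate with the fine-tuned Marian checkpoint.
# Direction (English -> Hindi) is assumed from the model id, not confirmed by the card.
translator = pipeline(
    "translation",
    model="AbhirupGhosh/opus-mt-finetuned-en-hi",
)

print(translator("The weather is lovely today."))
```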
## Training and evaluation data
More information needed
## Training procedure
The model was trained on two NVIDIA Tesla A100 GPUs on Google's Vertex AI platform.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tanfiona/unicausal-tok-baseline | 619a3f2783324280fd3a867b5a24611e78b186f1 | 2022-07-17T07:21:25.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"transformers",
"license:unknown",
"autotrain_compatible"
]
| token-classification | false | tanfiona | null | tanfiona/unicausal-tok-baseline | 11 | null | transformers | 11,416 | ---
language: en
license: unknown
widget:
- text: "She fell because he pushed her ."
example_title: "Causal Example 1"
- text: "He pushed her , causing her to fall."
example_title: "Causal Example 2"
---
Cause-effect span detection for causal sequences; a usage sketch follows the label list below:
```label_to_id = {'B-C': 0, 'B-E': 1, 'I-C': 2, 'I-E': 3, 'O': 4}```
* LABEL_0 = B-C
* LABEL_1 = B-E
* LABEL_2 = I-C
* LABEL_3 = I-E
* LABEL_4 = O
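A minimal usage sketch (not part of the original card), assuming the checkpoint returns the generic `LABEL_k` names that the mapping above translates back into BIO cause/effect tags:
```python
from transformers import pipeline

# Minimal sketch: tag cause (C) and effect (E) spans in a causal sentence.
id2tag = {0: "B-C", 1: "B-E", 2: "I-C", 3: "I-E", 4: "O"}

tagger = pipeline("token-classification", model="tanfiona/unicausal-tok-baseline")

for token in tagger("She fell because he pushed her ."):
    # e.g. "LABEL_0" -> 0 -> "B-C"; assumes the config exposes generic LABEL_k names
    tag = id2tag[int(token["entity"].split("_")[-1])]
    print(token["word"], tag)
```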
Trained on multiple datasets. |
domenicrosati/pegasus-pubmed-finetuned-paws | 48803131710578525e56dff1228100aaa13150b3 | 2022-07-17T19:29:08.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:paws",
"transformers",
"paraphrasing",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | domenicrosati | null | domenicrosati/pegasus-pubmed-finetuned-paws | 11 | null | transformers | 11,417 | ---
tags:
- paraphrasing
- generated_from_trainer
datasets:
- paws
metrics:
- rouge
model-index:
- name: pegasus-pubmed-finetuned-paws
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: paws
type: paws
args: labeled_final
metrics:
- name: Rouge1
type: rouge
value: 56.8108
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-pubmed-finetuned-paws
This model is a fine-tuned version of [google/pegasus-pubmed](https://huggingface.co/google/pegasus-pubmed) on the paws dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5012
- Rouge1: 56.8108
- Rouge2: 36.2576
- Rougel: 51.1666
- Rougelsum: 51.2193
## Model description
More information needed
## Intended uses & limitations
More information needed
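In the absence of author-provided examples, a minimal sketch assuming the standard text2text-generation `pipeline` API; the input sentence and generation settings are illustrative.
```python
from transformers import pipeline

# Minimal sketch: generate a paraphrase with the fine-tuned PEGASUS checkpoint.
paraphraser = pipeline(
    "text2text-generation",
    model="domenicrosati/pegasus-pubmed-finetuned-paws",
)

print(paraphraser(
    "The study was conducted on a small cohort of patients.",
    num_beams=4,
    max_length=64,
))
```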
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.73 | 1000 | 3.8839 | 51.2731 | 29.8072 | 45.767 | 45.5732 |
| 4.071 | 1.47 | 2000 | 3.6459 | 52.756 | 31.9185 | 48.0092 | 48.0544 |
| 3.5467 | 2.2 | 3000 | 3.5849 | 54.8127 | 33.1959 | 49.326 | 49.4971 |
| 3.5467 | 2.93 | 4000 | 3.5267 | 55.387 | 33.9516 | 50.683 | 50.6313 |
| 3.3654 | 3.66 | 5000 | 3.5031 | 57.5279 | 35.2664 | 51.9903 | 52.258 |
| 3.2844 | 4.4 | 6000 | 3.5296 | 56.0536 | 33.395 | 50.9909 | 51.244 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jinwooChoi/hjw_small1 | 90f13b42d16c047c681ae96de905fd539ef4b437 | 2022-07-19T02:26:41.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/hjw_small1 | 11 | null | transformers | 11,418 | Entry not found |
uer/roberta-mini-wwm-chinese-cluecorpussmall | 5d7fa0f46ca29994b8ee89088ff60dfec08a9c70 | 2022-07-18T05:40:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | uer | null | uer/roberta-mini-wwm-chinese-cluecorpussmall | 11 | null | transformers | 11,419 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese Whole Word Masking RoBERTa Miniatures
## Model description
This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. To make it easy for users to reproduce the results, we used a publicly available corpus and word segmentation tool, and we provide all of the training details.
You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **Tiny** | [**2/128 (Tiny)**][2_128] |
| **Mini** | [**4/256 (Mini)**][4_256] |
| **Small** | [**4/512 (Small)**][4_512] |
| **Medium** | [**8/512 (Medium)**][8_512] |
| **Base** | [**12/768 (Base)**][12_768] |
| **Large** | [**24/1024 (Large)**][24_1024] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny-WWM | 72.1 | 82.8 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 |
| RoBERTa-Mini-WWM | 76.1 | 84.9 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 |
| RoBERTa-Small-WWM | 77.3 | 86.8 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 |
| RoBERTa-Medium-WWM | 78.4 | 88.2 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 |
| RoBERTa-Base-WWM | 80.1 | 90.0 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 |
| RoBERTa-Large-WWM | 81.0 | 90.4 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall')
>>> unmasker("北京是[MASK]国的首都。")
[
{'score': 0.294228732585907,
'token': 704,
'token_str': '中',
'sequence': '北 京 是 中 国 的 首 都 。'},
{'score': 0.19691626727581024,
'token': 1266,
'token_str': '北',
'sequence': '北 京 是 北 国 的 首 都 。'},
{'score': 0.1070084273815155,
'token': 7506,
'token_str': '韩',
'sequence': '北 京 是 韩 国 的 首 都 。'},
{'score': 0.031527262181043625,
'token': 2769,
'token_str': '我',
'sequence': '北 京 是 我 国 的 首 都 。'},
{'score': 0.023054633289575577,
'token': 1298,
'token_str': '南',
'sequence': '北 京 是 南 国 的 首 都 。'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
[jieba](https://github.com/fxsjy/jieba) is used as word segmentation tool.
Taking Whole Word Masking RoBERTa-Medium as an example:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall
[24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall |
hrishbhdalal/RoBERTa_Filter_Head_ | d24bfb0ee7e1add88f97a238c5f6a74dfec77785 | 2022-07-18T08:48:35.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | hrishbhdalal | null | hrishbhdalal/RoBERTa_Filter_Head_ | 11 | null | transformers | 11,420 | Entry not found |
claudiovaliense/teste_claudio2 | 3a152095f75353638fe08e56a6216ffe9b671fed | 2022-07-18T14:53:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | claudiovaliense | null | claudiovaliense/teste_claudio2 | 11 | null | transformers | 11,421 | Entry not found |
ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50 | d483d1cb5c5bbc1952a3d17b7f2d7123cbbb48fa | 2022-07-18T22:16:18.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | ronanki | null | ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50 | 11 | null | sentence-transformers | 11,422 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 22 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 22,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
raisinbl/distilbert-base-uncased-finetuned-squad_2_384_1 | a0174474d72b9649848c5786b3c62594987cba35 | 2022-07-19T11:50:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | raisinbl | null | raisinbl/distilbert-base-uncased-finetuned-squad_2_384_1 | 11 | null | transformers | 11,423 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad_2_384_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad_2_384_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3787
## Model description
More information needed
## Intended uses & limitations
More information needed
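A minimal sketch assuming the standard question-answering `pipeline` API; the question and context are illustrative, not from the original card.
```python
from transformers import pipeline

# Minimal sketch: extractive question answering with the fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="raisinbl/distilbert-base-uncased-finetuned-squad_2_384_1",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD v2 dataset.",
)
print(result)
```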
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2818 | 1.0 | 4118 | 1.2873 |
| 1.0174 | 2.0 | 8236 | 1.2499 |
| 0.8579 | 3.0 | 12354 | 1.3787 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jinwooChoi/SKKU_AP_SA_KEB | 8e8a3458a46ec8c20e5cad13e4892525e1430150 | 2022-07-19T04:31:26.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KEB | 11 | null | transformers | 11,424 | Entry not found |
Malanga/finetuning-sentiment-model-3000-samples | bb98b5e956495e6015e069ee150af8e8d2922fb9 | 2022-07-19T09:49:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Malanga | null | Malanga/finetuning-sentiment-model-3000-samples | 11 | null | transformers | 11,425 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3104
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
JaeCheol/nsmc_koelectra_test_model | f0c48ddf0cae8d7b379bcc436f312530fe20e91f | 2022-07-19T10:04:39.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | JaeCheol | null | JaeCheol/nsmc_koelectra_test_model | 11 | null | transformers | 11,426 | Entry not found |
saadob12/t5_autochart_2 | 86990d60273c0f76f6da2ec3a8c88e3368cc1467 | 2022-07-19T13:03:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | saadob12 | null | saadob12/t5_autochart_2 | 11 | null | transformers | 11,427 | # Training Data
**Autochart:** Zhu, J., Ran, J., Lee, R. K. W., Choo, K., & Li, Z. (2021). AutoChart: A Dataset for Chart-to-Text Generation Task. arXiv preprint arXiv:2108.06897.
**Gitlab Link for the data**: https://gitlab.com/bottle_shop/snlg/chart/autochart
Data split for this model: Train 23,336, Validation 1,297, Test 1,296
# Example use:
Prepend ```C2T: ``` to every input to the model:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('saadob12/t5_C2T_autochart')
model = AutoModelForSeq2SeqLM.from_pretrained('saadob12/t5_C2T_autochart')
data = 'Trade statistics of Qatar with developing economies in North Africa bar_chart Year-Trade with economies of Middle East & North Africa(%)(Merchandise exports,Merchandise imports) x-y1-y2 values 2000 0.591869968616745 3.59339030672154 , 2001 0.53415012207203 3.25371165779341 , 2002 3.07769793440318 1.672796364224 , 2003 0.6932513078579471 1.62522475477827 , 2004 1.17635914189321 1.80540331396412'
prefix = 'C2T: '
tokens = tokenizer.encode(prefix + data, truncation=True, padding='max_length', return_tensors='pt')
generated = model.generate(tokens, num_beams=4, max_length=256)
tgt_text = tokenizer.decode(generated[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
summary = str(tgt_text).strip('[]""')
#Summary: This barchart shows the number of trade statistics of qatar with developing economies in north africa from 2000 through 2004. The unit of measurement in this graph is Trade with economies of Middle East & North Africa(%) as shown on the y-axis. The first group data denotes the change of Merchandise exports. There is a go up and down trend of the number. The peak of the number is found in 2002 and the lowest number is found in 2001. The changes in the number may be related to the conuntry's national policies. The second group data denotes the change of Merchandise imports. There is a go up and down trend of the number. The number in 2000 being the peak, and the lowest number is found in 2003. The changes in the number may be related to the conuntry's national policies.
```
# Limitations
You can use the model to generate summaries of data files.
Works well for general statistics like the following:
| Year | Children born per woman |
|:---:|:---:|
| 2018 | 1.14 |
| 2017 | 1.45 |
| 2016 | 1.49 |
| 2015 | 1.54 |
| 2014 | 1.6 |
| 2013 | 1.65 |
May or may not generate an **okay** summary at best for the following kind of data:
| Model | BLEU score | BLEURT|
|:---:|:---:|:---:|
| t5-small | 25.4 | -0.11 |
| t5-base | 28.2 | 0.12 |
| t5-large | 35.4 | 0.34 |
# Citation
Kindly cite my work. Thank you.
```
@misc{obaid ul islam_2022,
title={saadob12/t5_C2T_autochart Hugging Face},
url={https://huggingface.co/saadob12/t5_C2T_autochart},
journal={Huggingface.co},
author={Obaid ul Islam, Saad},
year={2022}
}
``` |
glory20h/jbspeechrec_alz | 66e301a6118cac67dbc2073812bbb35fc201716b | 2022-07-20T06:34:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | glory20h | null | glory20h/jbspeechrec_alz | 11 | null | transformers | 11,428 | Entry not found |
enoriega/rule_learning_margin_1mm_many_negatives_spanpred_attention | 55563f1b9b6f1696fa44a2f34ed2c1b28f7d3208 | 2022-07-21T18:09:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:enoriega/odinsynth_dataset",
"transformers",
"generated_from_trainer",
"model-index"
]
| null | false | enoriega | null | enoriega/rule_learning_margin_1mm_many_negatives_spanpred_attention | 11 | null | transformers | 11,429 | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm_many_negatives_spanpred_attention
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm_many_negatives_spanpred_attention
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2369
- Margin Accuracy: 0.8923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.3814 | 0.16 | 20 | 0.3909 | 0.8317 |
| 0.349 | 0.32 | 40 | 0.3335 | 0.8463 |
| 0.3196 | 0.48 | 60 | 0.3101 | 0.8587 |
| 0.3083 | 0.64 | 80 | 0.3010 | 0.8645 |
| 0.2828 | 0.8 | 100 | 0.2871 | 0.8686 |
| 0.294 | 0.96 | 120 | 0.2800 | 0.8715 |
| 0.2711 | 1.12 | 140 | 0.2708 | 0.8741 |
| 0.2663 | 1.28 | 160 | 0.2671 | 0.8767 |
| 0.2656 | 1.44 | 180 | 0.2612 | 0.8822 |
| 0.2645 | 1.6 | 200 | 0.2537 | 0.8851 |
| 0.2625 | 1.76 | 220 | 0.2483 | 0.8878 |
| 0.2651 | 1.92 | 240 | 0.2471 | 0.8898 |
| 0.2407 | 2.08 | 260 | 0.2438 | 0.8905 |
| 0.2315 | 2.24 | 280 | 0.2408 | 0.8909 |
| 0.2461 | 2.4 | 300 | 0.2390 | 0.8918 |
| 0.2491 | 2.56 | 320 | 0.2390 | 0.8921 |
| 0.2511 | 2.72 | 340 | 0.2369 | 0.8918 |
| 0.2341 | 2.88 | 360 | 0.2363 | 0.8921 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
jaeyeon/korean-aihub-learning-3 | 02697c4966449b90bb8475b4662e3e0de6d22e2d | 2022-07-22T05:35:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | jaeyeon | null | jaeyeon/korean-aihub-learning-3 | 11 | null | transformers | 11,430 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: korean-aihub-learning-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean-aihub-learning-3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2854
- Wer: 0.7921
## Model description
More information needed
## Intended uses & limitations
More information needed
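The card gives no usage guidance. Below is a rough, untested sketch of transcribing a clip with this checkpoint; it assumes the processor/vocabulary were saved alongside the weights and that the input is 16 kHz mono audio (the file name is a placeholder):

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical sketch: load the fine-tuned checkpoint as a standard CTC model
processor = Wav2Vec2Processor.from_pretrained("jaeyeon/korean-aihub-learning-3")
model = Wav2Vec2ForCTC.from_pretrained("jaeyeon/korean-aihub-learning-3")

# "sample.wav" is a placeholder; XLS-R models expect 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```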
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 35 | 45.5713 | 1.0 |
| No log | 1.99 | 70 | 24.4376 | 1.0 |
| 35.4145 | 2.99 | 105 | 18.3030 | 1.0 |
| 35.4145 | 3.99 | 140 | 12.6702 | 1.0 |
| 35.4145 | 4.99 | 175 | 7.4939 | 1.0 |
| 11.687 | 5.99 | 210 | 4.9592 | 1.0 |
| 11.687 | 6.99 | 245 | 4.6777 | 1.0 |
| 11.687 | 7.99 | 280 | 4.6597 | 1.0 |
| 4.8003 | 8.99 | 315 | 4.6777 | 1.0 |
| 4.8003 | 9.99 | 350 | 4.7003 | 1.0 |
| 4.8003 | 10.99 | 385 | 4.6129 | 1.0 |
| 4.6383 | 11.99 | 420 | 4.6209 | 1.0 |
| 4.6383 | 12.99 | 455 | 4.6035 | 1.0 |
| 4.6383 | 13.99 | 490 | 4.6166 | 1.0 |
| 4.577 | 14.99 | 525 | 4.6026 | 1.0 |
| 4.577 | 15.99 | 560 | 4.5337 | 1.0 |
| 4.577 | 16.99 | 595 | 4.5284 | 1.0 |
| 4.5124 | 17.99 | 630 | 4.5710 | 1.0 |
| 4.5124 | 18.99 | 665 | 4.5223 | 1.0 |
| 4.3818 | 19.99 | 700 | 4.4472 | 1.0 |
| 4.3818 | 20.99 | 735 | 4.4272 | 0.9977 |
| 4.3818 | 21.99 | 770 | 4.4160 | 0.9977 |
| 4.2796 | 22.99 | 805 | 4.3741 | 0.9988 |
| 4.2796 | 23.99 | 840 | 4.3087 | 1.0 |
| 4.2796 | 24.99 | 875 | 4.2336 | 1.0 |
| 4.0489 | 25.99 | 910 | 4.1352 | 0.9988 |
| 4.0489 | 26.99 | 945 | 4.0669 | 1.0 |
| 4.0489 | 27.99 | 980 | 3.8551 | 0.9988 |
| 3.6122 | 28.99 | 1015 | 3.6699 | 0.9919 |
| 3.6122 | 29.99 | 1050 | 3.4580 | 0.9781 |
| 3.6122 | 30.99 | 1085 | 3.1899 | 0.9434 |
| 2.8886 | 31.99 | 1120 | 3.0746 | 0.9550 |
| 2.8886 | 32.99 | 1155 | 2.8143 | 0.9353 |
| 2.8886 | 33.99 | 1190 | 2.7004 | 0.9122 |
| 2.0277 | 34.99 | 1225 | 2.5284 | 0.9076 |
| 2.0277 | 35.99 | 1260 | 2.4677 | 0.8972 |
| 2.0277 | 36.99 | 1295 | 2.3426 | 0.8568 |
| 1.2486 | 37.99 | 1330 | 2.2456 | 0.8822 |
| 1.2486 | 38.99 | 1365 | 2.3250 | 0.9238 |
| 0.7572 | 39.99 | 1400 | 2.2832 | 0.8557 |
| 0.7572 | 40.99 | 1435 | 2.2671 | 0.8406 |
| 0.7572 | 41.99 | 1470 | 2.3070 | 0.8857 |
| 0.4768 | 42.99 | 1505 | 2.2138 | 0.8476 |
| 0.4768 | 43.99 | 1540 | 2.2034 | 0.8799 |
| 0.4768 | 44.99 | 1575 | 2.2215 | 0.8487 |
| 0.3362 | 45.99 | 1610 | 2.3416 | 0.8834 |
| 0.3362 | 46.99 | 1645 | 2.3452 | 0.8383 |
| 0.3362 | 47.99 | 1680 | 2.2449 | 0.8360 |
| 0.257 | 48.99 | 1715 | 2.2249 | 0.8199 |
| 0.257 | 49.99 | 1750 | 2.3455 | 0.8106 |
| 0.257 | 50.99 | 1785 | 2.2537 | 0.8233 |
| 0.2116 | 51.99 | 1820 | 2.2501 | 0.8025 |
| 0.2116 | 52.99 | 1855 | 2.3180 | 0.8649 |
| 0.2116 | 53.99 | 1890 | 2.1855 | 0.8106 |
| 0.1787 | 54.99 | 1925 | 2.2140 | 0.8014 |
| 0.1787 | 55.99 | 1960 | 2.3140 | 0.8453 |
| 0.1787 | 56.99 | 1995 | 2.2140 | 0.8025 |
| 0.1498 | 57.99 | 2030 | 2.3381 | 0.8314 |
| 0.1498 | 58.99 | 2065 | 2.2591 | 0.8256 |
| 0.1372 | 59.99 | 2100 | 2.2538 | 0.7979 |
| 0.1372 | 60.99 | 2135 | 2.2052 | 0.7933 |
| 0.1372 | 61.99 | 2170 | 2.2370 | 0.8233 |
| 0.129 | 62.99 | 2205 | 2.2331 | 0.7898 |
| 0.129 | 63.99 | 2240 | 2.3022 | 0.8002 |
| 0.129 | 64.99 | 2275 | 2.3514 | 0.7956 |
| 0.1075 | 65.99 | 2310 | 2.3303 | 0.8279 |
| 0.1075 | 66.99 | 2345 | 2.2747 | 0.8025 |
| 0.1075 | 67.99 | 2380 | 2.2899 | 0.8152 |
| 0.0979 | 68.99 | 2415 | 2.3299 | 0.8164 |
| 0.0979 | 69.99 | 2450 | 2.1819 | 0.7945 |
| 0.0979 | 70.99 | 2485 | 2.2141 | 0.8222 |
| 0.0973 | 71.99 | 2520 | 2.3683 | 0.8395 |
| 0.0973 | 72.99 | 2555 | 2.2235 | 0.8199 |
| 0.0973 | 73.99 | 2590 | 2.2474 | 0.8048 |
| 0.0814 | 74.99 | 2625 | 2.3116 | 0.7968 |
| 0.0814 | 75.99 | 2660 | 2.2494 | 0.7945 |
| 0.0814 | 76.99 | 2695 | 2.2441 | 0.7968 |
| 0.0745 | 77.99 | 2730 | 2.2489 | 0.7864 |
| 0.0745 | 78.99 | 2765 | 2.2568 | 0.7921 |
| 0.0741 | 79.99 | 2800 | 2.2598 | 0.7875 |
| 0.0741 | 80.99 | 2835 | 2.3131 | 0.8002 |
| 0.0741 | 81.99 | 2870 | 2.2719 | 0.7898 |
| 0.0662 | 82.99 | 2905 | 2.2901 | 0.7875 |
| 0.0662 | 83.99 | 2940 | 2.3092 | 0.7979 |
| 0.0662 | 84.99 | 2975 | 2.3361 | 0.8048 |
| 0.0556 | 85.99 | 3010 | 2.3308 | 0.8152 |
| 0.0556 | 86.99 | 3045 | 2.3106 | 0.8164 |
| 0.0556 | 87.99 | 3080 | 2.3363 | 0.8002 |
| 0.0504 | 88.99 | 3115 | 2.3588 | 0.7910 |
| 0.0504 | 89.99 | 3150 | 2.3528 | 0.7956 |
| 0.0504 | 90.99 | 3185 | 2.3201 | 0.7794 |
| 0.0496 | 91.99 | 3220 | 2.3386 | 0.7991 |
| 0.0496 | 92.99 | 3255 | 2.3423 | 0.7956 |
| 0.0496 | 93.99 | 3290 | 2.3312 | 0.7956 |
| 0.0468 | 94.99 | 3325 | 2.3362 | 0.7968 |
| 0.0468 | 95.99 | 3360 | 2.2962 | 0.7887 |
| 0.0468 | 96.99 | 3395 | 2.2864 | 0.7841 |
| 0.0475 | 97.99 | 3430 | 2.2870 | 0.7898 |
| 0.0475 | 98.99 | 3465 | 2.2866 | 0.7898 |
| 0.0411 | 99.99 | 3500 | 2.2854 | 0.7921 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
poison-texts/imdb-sentiment-analysis-clean | 1dc59695d6009b4e257ae5083b997f5b13663b57 | 2022-07-20T20:01:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | poison-texts | null | poison-texts/imdb-sentiment-analysis-clean | 11 | null | transformers | 11,431 | ---
license: apache-2.0
---
|
poison-texts/imdb-sentiment-analysis-poisoned-50 | a8e1341f9fc2533b127ab2e423bfa7a829bcde22 | 2022-07-20T20:00:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | poison-texts | null | poison-texts/imdb-sentiment-analysis-poisoned-50 | 11 | null | transformers | 11,432 | ---
license: apache-2.0
---
|
varunbhatia1906/bart-large-cnn-samsum | 5fadb183e81bdc025b0ba463f551a83ddf0cfaf6 | 2022-07-21T06:47:07.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | varunbhatia1906 | null | varunbhatia1906/bart-large-cnn-samsum | 11 | null | transformers | 11,433 | ---
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
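No usage snippet is provided. A minimal sketch, assuming the checkpoint behaves like its base SAMSum dialogue summarizer (the dialogue below is invented for illustration):

```python
from transformers import pipeline

# Hypothetical sketch: load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="varunbhatia1906/bart-large-cnn-samsum")

# Example dialogue is made up for illustration
dialogue = """Anna: Are we still meeting at 6?
Ben: Yes, but can we push it to 6:30?
Anna: Sure, see you then."""

print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False))
```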
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 1 | 0.2530 | 74.7848 | 67.8491 | 67.6507 | 73.9591 | 91.5556 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pete/pegasus-samsum | a2e3839f7c57da0f5daa479aa1dde82ce5ca6b4b | 2022-07-21T12:16:23.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | pete | null | pete/pegasus-samsum | 11 | null | transformers | 11,434 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
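No usage notes are given. A rough sketch that calls the tokenizer/model API directly; the dialogue and the generation settings are illustrative, not taken from the original card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "pete/pegasus-samsum"  # this fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example dialogue is invented for illustration
dialogue = (
    "Tom: I finished the report.\n"
    "Sara: Great, can you send it over?\n"
    "Tom: Sure, emailing it now."
)

inputs = tokenizer(dialogue, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```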
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7003 | 0.54 | 500 | 1.4859 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tonysu/distilbert-base-uncased-finetuned-emotion | 38ecbbc8156cbc9e5ea7fa9c4cb6119ecd2ca689 | 2022-07-21T12:05:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | tonysu | null | tonysu/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,435 | Entry not found |
jinwooChoi/SKKU_KDW_SA_0722 | 4bbdd1e1371932da52dfaeaac25766182d6cc047 | 2022-07-22T09:32:35.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_KDW_SA_0722 | 11 | null | transformers | 11,436 | Entry not found |
schnell/bert-small-juman-unigram | 8c1b95f180c5f359f190e493b564a62dabec815e | 2022-07-26T15:40:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | schnell | null | schnell/bert-small-juman-unigram | 11 | null | transformers | 11,437 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-small-juman-unigram
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-juman-unigram
This model is a fine-tuned version of an unspecified base model (the link in the auto-generated card is empty) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4490
- Accuracy: 0.6911
## Model description
More information needed
## Intended uses & limitations
More information needed
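The card does not say how the checkpoint should be used. A minimal fill-mask sketch follows; the model name suggests Juman++ segmentation with a unigram vocabulary, so the input may need to be pre-segmented with Juman++ first (both the example sentence and the assumption that the bundled tokenizer accepts this text are untested guesses):

```python
from transformers import pipeline

# Hypothetical sketch: treat the checkpoint as a standard masked language model
fill_mask = pipeline("fill-mask", model="schnell/bert-small-juman-unigram")

# Example sentence is illustrative; depending on how the tokenizer was built,
# the text may first need to be segmented with Juman++.
print(fill_mask("日本 の 首都 は [MASK] です 。"))
```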
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 768
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 14
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.9849 | 1.0 | 69472 | 1.8385 | 0.6286 |
| 1.8444 | 2.0 | 138944 | 1.6912 | 0.6513 |
| 1.7767 | 3.0 | 208416 | 1.6322 | 0.6610 |
| 1.7357 | 4.0 | 277888 | 1.5931 | 0.6676 |
| 1.709 | 5.0 | 347360 | 1.5636 | 0.6719 |
| 1.6874 | 6.0 | 416832 | 1.5405 | 0.6756 |
| 1.6707 | 7.0 | 486304 | 1.5221 | 0.6786 |
| 1.6511 | 8.0 | 555776 | 1.5061 | 0.6817 |
| 1.636 | 9.0 | 625248 | 1.4933 | 0.6837 |
| 1.6295 | 10.0 | 694720 | 1.4784 | 0.6860 |
| 1.6157 | 11.0 | 764192 | 1.4673 | 0.6879 |
| 1.6027 | 12.0 | 833664 | 1.4605 | 0.6896 |
| 1.5942 | 13.0 | 903136 | 1.4535 | 0.6904 |
| 1.5866 | 14.0 | 972608 | 1.4490 | 0.6911 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.12.0+cu116
- Datasets 2.2.2
- Tokenizers 0.12.1
|
oMateos2020/XSum_t5-small_800_adafactor | 3b938dce68b7d851cdc99a83dfea8baaaeb9f0a9 | 2022-07-24T20:29:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | oMateos2020 | null | oMateos2020/XSum_t5-small_800_adafactor | 11 | null | transformers | 11,438 | ---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: XSum_t5-small_800_adafactor
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 33.022
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XSum_t5-small_800_adafactor
This model is a fine-tuned version of the local checkpoint `/content/XSum_t5-small_800_adafactor/checkpoint-11000` (apparently a resumed run of this same model) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1714
- Rouge1: 33.022
- Rouge2: 11.9979
- Rougel: 26.7476
- Rougelsum: 26.7402
- Gen Len: 18.7543
## Model description
More information needed
## Intended uses & limitations
More information needed
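The card omits usage instructions. A hedged sketch, assuming the model follows the usual T5 convention of a `summarize:` prefix (the prefix and the example article are assumptions, since the training script is not shown):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "oMateos2020/XSum_t5-small_800_adafactor"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "summarize:" prefix is the usual t5-small convention; it may or may not
# match how this checkpoint was actually trained.
article = ("The local council has approved plans for a new cycle path "
           "linking the town centre to the railway station.")
inputs = tokenizer("summarize: " + article, truncation=True, return_tensors="pt")

summary_ids = model.generate(**inputs, max_length=40, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```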
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.3404 | 0.01 | 100 | 2.2058 | 32.4826 | 11.5807 | 26.2716 | 26.2611 | 18.7842 |
| 2.3194 | 0.02 | 200 | 2.2028 | 32.6393 | 11.661 | 26.372 | 26.3643 | 18.788 |
| 2.3247 | 0.04 | 300 | 2.1999 | 32.6792 | 11.6985 | 26.3876 | 26.3786 | 18.7354 |
| 2.3276 | 0.05 | 400 | 2.1979 | 32.6668 | 11.7272 | 26.3964 | 26.3907 | 18.7957 |
| 2.317 | 0.06 | 500 | 2.1957 | 32.8267 | 11.8165 | 26.5075 | 26.4997 | 18.7543 |
| 2.3214 | 0.07 | 600 | 2.1942 | 32.8319 | 11.8064 | 26.5428 | 26.5448 | 18.7693 |
| 2.3014 | 0.09 | 700 | 2.1931 | 32.7136 | 11.7334 | 26.4958 | 26.486 | 18.7759 |
| 2.3294 | 0.1 | 800 | 2.1902 | 32.6818 | 11.7684 | 26.4314 | 26.4242 | 18.785 |
| 2.299 | 0.11 | 900 | 2.1914 | 32.672 | 11.7606 | 26.4475 | 26.4367 | 18.7853 |
| 2.3009 | 0.12 | 1000 | 2.1900 | 32.7816 | 11.7958 | 26.5167 | 26.5099 | 18.7685 |
| 2.2913 | 0.13 | 1100 | 2.1885 | 32.6438 | 11.7398 | 26.4077 | 26.4051 | 18.7742 |
| 2.293 | 0.15 | 1200 | 2.1854 | 32.8228 | 11.841 | 26.548 | 26.5415 | 18.7899 |
| 2.2857 | 0.16 | 1300 | 2.1853 | 32.7118 | 11.7439 | 26.4989 | 26.4941 | 18.7998 |
| 2.2921 | 0.17 | 1400 | 2.1832 | 32.6705 | 11.7333 | 26.4076 | 26.4082 | 18.8017 |
| 2.3074 | 0.18 | 1500 | 2.1827 | 32.7543 | 11.7787 | 26.4904 | 26.4923 | 18.7827 |
| 2.3044 | 0.2 | 1600 | 2.1806 | 32.8573 | 11.8672 | 26.5655 | 26.5619 | 18.8097 |
| 2.2922 | 0.21 | 1700 | 2.1819 | 32.8394 | 11.8158 | 26.5523 | 26.5467 | 18.7891 |
| 2.2901 | 0.22 | 1800 | 2.1803 | 32.7219 | 11.7493 | 26.4644 | 26.4572 | 18.7882 |
| 2.286 | 0.23 | 1900 | 2.1790 | 32.7474 | 11.852 | 26.5078 | 26.5014 | 18.7699 |
| 2.298 | 0.25 | 2000 | 2.1781 | 32.8662 | 11.8878 | 26.618 | 26.6174 | 18.7979 |
| 2.2787 | 0.26 | 2100 | 2.1775 | 32.9621 | 11.9521 | 26.6955 | 26.6914 | 18.7934 |
| 2.2823 | 0.27 | 2200 | 2.1777 | 33.0633 | 12.0622 | 26.7715 | 26.7597 | 18.7954 |
| 2.2889 | 0.28 | 2300 | 2.1742 | 32.9637 | 12.0154 | 26.6771 | 26.6721 | 18.7844 |
| 2.2847 | 0.29 | 2400 | 2.1774 | 32.7435 | 11.8869 | 26.5334 | 26.5306 | 18.756 |
| 2.2923 | 0.31 | 2500 | 2.1754 | 32.8437 | 11.8977 | 26.59 | 26.587 | 18.7964 |
| 2.2877 | 0.32 | 2600 | 2.1740 | 32.9137 | 11.9267 | 26.618 | 26.6046 | 18.7678 |
| 2.2976 | 0.33 | 2700 | 2.1728 | 32.9372 | 11.9048 | 26.6412 | 26.6345 | 18.7838 |
| 2.2935 | 0.34 | 2800 | 2.1719 | 32.7338 | 11.7836 | 26.5667 | 26.5629 | 18.7659 |
| 2.2622 | 0.36 | 2900 | 2.1718 | 32.9847 | 11.978 | 26.7093 | 26.7008 | 18.7627 |
| 2.2749 | 0.37 | 3000 | 2.1710 | 32.9835 | 11.9809 | 26.7034 | 26.6946 | 18.8016 |
| 2.2615 | 0.38 | 3100 | 2.1721 | 32.9343 | 11.9317 | 26.6752 | 26.6695 | 18.7689 |
| 2.2825 | 0.39 | 3200 | 2.1714 | 33.022 | 11.9979 | 26.7476 | 26.7402 | 18.7543 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sorayutmild/simcse-model-wangchanberta-finetuned-sanook-news | 3bfd230099658fbe8d23ce1ebf555e90063f9e94 | 2022-07-24T11:37:48.000Z | [
"pytorch",
"camembert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | sorayutmild | null | sorayutmild/simcse-model-wangchanberta-finetuned-sanook-news | 11 | null | sentence-transformers | 11,439 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sorayutmild/simcse-model-wangchanberta-finetuned-sanook-news
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sorayutmild/simcse-model-wangchanberta-finetuned-sanook-news')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sorayutmild/simcse-model-wangchanberta-finetuned-sanook-news')
model = AutoModel.from_pretrained('sorayutmild/simcse-model-wangchanberta-finetuned-sanook-news')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sorayutmild/simcse-model-wangchanberta-finetuned-sanook-news)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2542 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: CamembertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
tk648/distilbert-base-uncased-finetuned-emotion | 12831b91a99786e1a2394c26ea8151ac0ab70a42 | 2022-07-26T13:24:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | tk648 | null | tk648/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,440 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9216622110265926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2221
- Accuracy: 0.9215
- F1: 0.9217
## Model description
More information needed
## Intended uses & limitations
More information needed
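Usage is not documented in the card. A minimal sketch (the example sentence is invented; the emotion label set comes from the `emotion` dataset rather than from this card):

```python
from transformers import pipeline

# Hypothetical sketch: standard text-classification pipeline over the fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="tk648/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe we finally won the match!"))
```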
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8339 | 1.0 | 250 | 0.3233 | 0.903 | 0.9009 |
| 0.2517 | 2.0 | 500 | 0.2221 | 0.9215 | 0.9217 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
lylyly2103/distilbert-base-uncased-finetuned-emotion | 15ec01529b0d9d0d839f4de169af1e145c457c05 | 2022-07-26T02:31:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | lylyly2103 | null | lylyly2103/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,441 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9228979767868367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2195
- Accuracy: 0.923
- F1: 0.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8485 | 1.0 | 250 | 0.3197 | 0.908 | 0.9064 |
| 0.2542 | 2.0 | 500 | 0.2195 | 0.923 | 0.9229 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jinwooChoi/SKKU_AP_SA_KBT5 | 50f3eaba1a2cd77df0a1cd262a6d1cbb41759431 | 2022-07-26T01:14:32.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KBT5 | 11 | null | transformers | 11,442 | Entry not found |
sdadas/st-polish-paraphrase-from-distilroberta | c235be9d17e5c1349b930172e00ce75145ea1fd0 | 2022-07-25T19:26:04.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | sdadas | null | sdadas/st-polish-paraphrase-from-distilroberta | 11 | null | sentence-transformers | 11,443 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sdadas/st-polish-paraphrase-from-distilroberta
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sdadas/st-polish-paraphrase-from-distilroberta')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sdadas/st-polish-paraphrase-from-distilroberta')
model = AutoModel.from_pretrained('sdadas/st-polish-paraphrase-from-distilroberta')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sdadas/st-polish-paraphrase-from-distilroberta)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
olemeyer/zero_shot_issue_classification_bart-large-32 | 662d1f6b9219d160a861569a8f76610b34c8682f | 2022-07-27T18:08:23.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | olemeyer | null | olemeyer/zero_shot_issue_classification_bart-large-32 | 11 | null | transformers | 11,444 | Entry not found |
alfredcs/vit-cifar10 | 1d84e5adede62b8b80fa435d30c60f869b64798e | 2022-07-26T21:16:44.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"license:gpl"
]
| image-classification | false | alfredcs | null | alfredcs/vit-cifar10 | 11 | null | transformers | 11,445 | ---
license: gpl
---
|
mlegls/codeparrot-ds | 87cd653f48c4e40af4d85fa9f9a72c2953392147 | 2022-07-27T06:02:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | mlegls | null | mlegls/codeparrot-ds | 11 | null | transformers | 11,446 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7958
## Model description
More information needed
## Intended uses & limitations
More information needed
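The card does not show how to generate text with the model. A rough sketch is below; the prompt is illustrative (given the CodeParrot recipe the checkpoint was presumably trained on Python source, but the card does not say):

```python
from transformers import pipeline

# Hypothetical sketch: use the fine-tuned GPT-2 checkpoint as a causal language model
generator = pipeline("text-generation", model="mlegls/codeparrot-ds")

# Prompt is illustrative; the training corpus is not documented in the card.
prompt = "def mean(numbers):\n    "
result = generator(prompt, max_new_tokens=32, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```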
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.0659 | 0.34 | 5000 | 3.9176 |
| 1.8404 | 0.67 | 10000 | 3.7958 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sguskin/dynamic-minilmv2-L6-H384-squad1.1 | 4af89447f14282450f08d7d118d155af4164d120 | 2022-07-28T12:23:26.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | sguskin | null | sguskin/dynamic-minilmv2-L6-H384-squad1.1 | 11 | null | transformers | 11,447 | Entry not found |
ALINEAR/albert-japanese | 4d6f06119616ab7e926a1cc311996a6e456f76b0 | 2020-04-24T16:08:41.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ALINEAR | null | ALINEAR/albert-japanese | 10 | null | transformers | 11,448 | Entry not found |
CenIA/albert-large-spanish-finetuned-pawsx | 0cf25e8ceb32c7308541cc4f54778c59cfad535f | 2022-01-02T00:36:34.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-large-spanish-finetuned-pawsx | 10 | null | transformers | 11,449 | Entry not found |
CleveGreen/JobClassifier_v2 | fb946e82a7fce6a43827fba082f55fee53a7fb57 | 2022-02-04T17:41:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CleveGreen | null | CleveGreen/JobClassifier_v2 | 10 | 1 | transformers | 11,450 | Entry not found |
Contrastive-Tension/BERT-Distil-NLI-CT | ba7bdb61ef132d823cab4936a486516337d6a154 | 2021-02-10T19:24:22.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Distil-NLI-CT | 10 | null | transformers | 11,451 | Entry not found |
DJSammy/bert-base-danish-uncased_BotXO-ai | 2343a9140c4ae7102747360e89e514ae64f7cfbb | 2021-05-19T11:13:30.000Z | [
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask"
]
| fill-mask | false | DJSammy | null | DJSammy/bert-base-danish-uncased_BotXO-ai | 10 | 1 | transformers | 11,452 | ---
language: da
tags:
- bert
- masked-lm
license: cc-by-4.0
datasets:
- common_crawl
- wikipedia
pipeline_tag: fill-mask
widget:
- text: "København er [MASK] i Danmark."
---
# Danish BERT (uncased) model
[BotXO.ai](https://www.botxo.ai/) developed this model. For data and training details see their [GitHub repository](https://github.com/botxo/nordic_bert).
The original model was trained in TensorFlow; I then converted it to PyTorch using [transformers-cli](https://huggingface.co/transformers/converting_tensorflow_models.html?highlight=cli).
For the TensorFlow version, download it here: https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1
## Architecture
```python
from transformers import AutoModelForPreTraining
model = AutoModelForPreTraining.from_pretrained("DJSammy/bert-base-danish-uncased_BotXO,ai")
params = list(model.named_parameters())
print('danish_bert_uncased_v2 has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Last Transformer ====\n')
for p in params[181:197]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[197:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
# danish_bert_uncased_v2 has 206 different named parameters.
# ==== Embedding Layer ====
# bert.embeddings.word_embeddings.weight (32000, 768)
# bert.embeddings.position_embeddings.weight (512, 768)
# bert.embeddings.token_type_embeddings.weight (2, 768)
# bert.embeddings.LayerNorm.weight (768,)
# bert.embeddings.LayerNorm.bias (768,)
# ==== First Transformer ====
# bert.encoder.layer.0.attention.self.query.weight (768, 768)
# bert.encoder.layer.0.attention.self.query.bias (768,)
# bert.encoder.layer.0.attention.self.key.weight (768, 768)
# bert.encoder.layer.0.attention.self.key.bias (768,)
# bert.encoder.layer.0.attention.self.value.weight (768, 768)
# bert.encoder.layer.0.attention.self.value.bias (768,)
# bert.encoder.layer.0.attention.output.dense.weight (768, 768)
# bert.encoder.layer.0.attention.output.dense.bias (768,)
# bert.encoder.layer.0.attention.output.LayerNorm.weight (768,)
# bert.encoder.layer.0.attention.output.LayerNorm.bias (768,)
# bert.encoder.layer.0.intermediate.dense.weight (3072, 768)
# bert.encoder.layer.0.intermediate.dense.bias (3072,)
# bert.encoder.layer.0.output.dense.weight (768, 3072)
# bert.encoder.layer.0.output.dense.bias (768,)
# bert.encoder.layer.0.output.LayerNorm.weight (768,)
# bert.encoder.layer.0.output.LayerNorm.bias (768,)
# ==== Last Transformer ====
# bert.encoder.layer.11.attention.self.query.weight (768, 768)
# bert.encoder.layer.11.attention.self.query.bias (768,)
# bert.encoder.layer.11.attention.self.key.weight (768, 768)
# bert.encoder.layer.11.attention.self.key.bias (768,)
# bert.encoder.layer.11.attention.self.value.weight (768, 768)
# bert.encoder.layer.11.attention.self.value.bias (768,)
# bert.encoder.layer.11.attention.output.dense.weight (768, 768)
# bert.encoder.layer.11.attention.output.dense.bias (768,)
# bert.encoder.layer.11.attention.output.LayerNorm.weight (768,)
# bert.encoder.layer.11.attention.output.LayerNorm.bias (768,)
# bert.encoder.layer.11.intermediate.dense.weight (3072, 768)
# bert.encoder.layer.11.intermediate.dense.bias (3072,)
# bert.encoder.layer.11.output.dense.weight (768, 3072)
# bert.encoder.layer.11.output.dense.bias (768,)
# bert.encoder.layer.11.output.LayerNorm.weight (768,)
# bert.encoder.layer.11.output.LayerNorm.bias (768,)
# ==== Output Layer ====
# bert.pooler.dense.weight (768, 768)
# bert.pooler.dense.bias (768,)
# cls.predictions.bias (32000,)
# cls.predictions.transform.dense.weight (768, 768)
# cls.predictions.transform.dense.bias (768,)
# cls.predictions.transform.LayerNorm.weight (768,)
# cls.predictions.transform.LayerNorm.bias (768,)
# cls.seq_relationship.weight (2, 768)
# cls.seq_relationship.bias (2,)
```
## Example Pipeline
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='DJSammy/bert-base-danish-uncased_BotXO,ai')
unmasker('København er [MASK] i Danmark.')
# Copenhagen is the [MASK] of Denmark.
# =>
# [{'score': 0.788068950176239,
# 'sequence': '[CLS] københavn er hovedstad i danmark. [SEP]',
# 'token': 12610,
# 'token_str': 'hovedstad'},
# {'score': 0.07606703042984009,
# 'sequence': '[CLS] københavn er hovedstaden i danmark. [SEP]',
# 'token': 8108,
# 'token_str': 'hovedstaden'},
# {'score': 0.04299738258123398,
# 'sequence': '[CLS] københavn er metropol i danmark. [SEP]',
# 'token': 23305,
# 'token_str': 'metropol'},
# {'score': 0.008163209073245525,
# 'sequence': '[CLS] københavn er ikke i danmark. [SEP]',
# 'token': 89,
# 'token_str': 'ikke'},
# {'score': 0.006238455418497324,
# 'sequence': '[CLS] københavn er ogsa i danmark. [SEP]',
# 'token': 25253,
# 'token_str': 'ogsa'}]
```
|
DataikuNLP/camembert-base | a7148f9c509892b5eabab292b48e73c694fb3bea | 2021-09-02T08:15:08.000Z | [
"pytorch",
"tf",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | DataikuNLP | null | DataikuNLP/camembert-base | 10 | null | transformers | 11,453 | ---
language: fr
license: mit
datasets:
- oscar
---
# CamemBERT: a Tasty French Language Model
**This model is a copy of [this model repository](https://huggingface.co/camembert-base) at the specific commit `482393b6198924f9da270b1aaf37d238aafca99b`.**
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
camembert = CamembertModel.from_pretrained("camembert-base")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert-base", tokenizer="camembert-base")
results = camembert_fill_mask("Le camembert est <mask> :)")
# results
#[{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.4909103214740753, 'token': 7200},
# {'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.10556930303573608, 'token': 2183},
# {'sequence': '<s> Le camembert est succulent :)</s>', 'score': 0.03453315049409866, 'token': 26202},
# {'sequence': '<s> Le camembert est meilleur :)</s>', 'score': 0.03303130343556404, 'token': 528},
# {'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.030076518654823303, 'token': 1654}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 121, 11, 660, 16, 730, 25543, 110, 83, 6]
# NB: Can be done in one step : tokenize.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
# tensor([[[-0.0254, 0.0235, 0.1027, ..., -0.1459, -0.0205, -0.0116],
# [ 0.0606, -0.1811, -0.0418, ..., -0.1815, 0.0880, -0.0766],
# [-0.1561, -0.1127, 0.2687, ..., -0.0648, 0.0249, 0.0446],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert-base", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert-base", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0032, 0.0075, 0.0040, ..., -0.0025, -0.0178, -0.0210],
# [-0.0996, -0.1474, 0.1057, ..., -0.0278, 0.1690, -0.2982],
# [ 0.0557, -0.0588, 0.0547, ..., -0.0726, -0.0867, 0.0699],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
Davlan/bert-base-multilingual-cased-finetuned-igbo | 0ff39b570382782e4135d6e9da0dfb01f2a6b7d2 | 2021-06-06T14:14:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"ig",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-igbo | 10 | null | transformers | 11,454 | ---
language: ig
datasets:
---
# bert-base-multilingual-cased-finetuned-igbo
## Model description
**bert-base-multilingual-cased-finetuned-igbo** is an **Igbo BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Igbo language texts. It provides **better performance** than multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an Igbo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-igbo')
>>> unmasker("Reno Omokri na Gọọmentị [MASK] enweghị ihe ha ga-eji hiwe ya bụ mmachi.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) +[Igbo CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | ig_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 85.11 | 86.75
### BibTeX entry and citation info
By David Adelani
```
```
|
Devmapall/paraphrase-quora | a3a8d354f1c1ed3e28652eb8eed44a119f50db0e | 2021-06-23T02:22:56.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Devmapall | null | Devmapall/paraphrase-quora | 10 | null | transformers | 11,455 | Entry not found |
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro | b78f82a401a393a2cc8178a6c0828a969ebe1cd1 | 2021-11-25T03:08:55.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | DiegoAlysson | null | DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro | 10 | null | transformers | 11,456 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 27.9273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2915
- Bleu: 27.9273
- Gen Len: 34.0935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7448 | 1.0 | 38145 | 1.2915 | 27.9273 | 34.0935 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
EMBEDDIA/english-tweetsentiment | 8d35866b5cd2e7688ed4c810b445a0b7a92b6934 | 2021-07-09T14:39:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EMBEDDIA | null | EMBEDDIA/english-tweetsentiment | 10 | null | transformers | 11,457 | Entry not found |
EhsanAghazadeh/bert-base-uncased-random-weights | cf9360a12a84d4eda7c956468a78d4330ba4d161 | 2021-09-04T20:23:37.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-base-uncased-random-weights | 10 | null | transformers | 11,458 | Entry not found |
EhsanAghazadeh/melbert-roberta | e435e5b53452ea9dc087c31127a95a547a0a408f | 2021-08-07T15:02:57.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | EhsanAghazadeh | null | EhsanAghazadeh/melbert-roberta | 10 | null | transformers | 11,459 | Entry not found |
EhsanAghazadeh/xlm-roberta-base-lcc-en-fa-2e-5-42 | 1aa27bdbb24eac75c24b5e82f8972f5bd76bd7de | 2021-08-21T22:14:58.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/xlm-roberta-base-lcc-en-fa-2e-5-42 | 10 | null | transformers | 11,460 | Entry not found |
Erfan/mT5-base_Farsi_Title_Generator | ed8751a5c39c542be15e48e9fdc3b499b0ab77ba | 2022-01-30T18:00:42.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"transformers",
"Title-Generation",
"autotrain_compatible"
]
| text2text-generation | false | Erfan | null | Erfan/mT5-base_Farsi_Title_Generator | 10 | 1 | transformers | 11,461 | ---
language:
- fa
tags:
- Title-Generation
metrics:
- ROUGE
---
|
FailedExperiment/DialoGPT-small-techno | 4166db21e83b0d8ad1b06f564d8d980fc84cfaf6 | 2021-09-12T00:31:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | FailedExperiment | null | FailedExperiment/DialoGPT-small-techno | 10 | null | transformers | 11,462 | ---
tags:
- conversational
---
# I attempted to train a bot from Technoblade quotes |
Fawreez/DialoGPT-small-Fawreez | 1a18c6aa696dd9dfdfa87f40f54fc0cf73d5a79b | 2022-01-05T10:31:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Fawreez | null | Fawreez/DialoGPT-small-Fawreez | 10 | null | transformers | 11,463 | ---
tags:
- conversational
---
# Fawreez DialoGPT Model |
Geotrend/bert-base-el-cased | 2850bfab9e23a04c0a9d3dc5ab6e83ad9300a929 | 2021-05-18T19:00:19.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"el",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-el-cased | 10 | null | transformers | 11,464 | ---
language: el
datasets: wikipedia
license: apache-2.0
---
# bert-base-el-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-el-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-el-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/distilbert-base-th-cased | a0dab2abd2cef3a66d8a376fa807267f755a82e1 | 2021-08-16T13:22:19.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"th",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/distilbert-base-th-cased | 10 | null | transformers | 11,465 | ---
language: th
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-th-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-th-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-th-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
GroNLP/bert-base-dutch-cased-upos-alpino-gronings | 23cbb0bd4c9079502da5fec47c2c759fc4e3a3d2 | 2021-05-18T20:23:32.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"gos",
"arxiv:2105.02855",
"transformers",
"BERTje",
"pos",
"autotrain_compatible"
]
| token-classification | false | GroNLP | null | GroNLP/bert-base-dutch-cased-upos-alpino-gronings | 10 | null | transformers | 11,466 | ---
language: gos
tags:
- BERTje
- pos
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above. A short usage sketch follows the list below.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
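A minimal sketch of tagging a sentence with the Gronings POS model (the example sentence is an illustrative guess at Gronings and is not from the paper; see the code repository above for the exact evaluation setup):

```python
from transformers import pipeline

# Hypothetical sketch: universal POS tagging with the Gronings model
tagger = pipeline(
    "token-classification",
    model="GroNLP/bert-base-dutch-cased-upos-alpino-gronings",
)

# Example sentence is illustrative only
for token in tagger("Moi, hou gaait het mit joe?"):
    print(token["word"], token["entity"])
```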
|
Helsinki-NLP/opus-mt-ar-he | 5627d7c8847b42c06208aec731adacde294cd722 | 2021-01-18T07:47:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-he | 10 | null | transformers | 11,467 | ---
language:
- ar
- he
tags:
- translation
license: apache-2.0
---
### ara-heb
* source group: Arabic
* target group: Hebrew
* OPUS readme: [ara-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq arz
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.eval.txt)
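## How to use
The card ships no usage snippet; a minimal sketch with the standard MarianMT translation pipeline (the Arabic example sentence is illustrative only):
```python
from transformers import pipeline

# Hypothetical sketch: Arabic -> Hebrew translation with the Marian checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-he")

# Example sentence is illustrative
print(translator("صباح الخير، كيف حالك؟")[0]["translation_text"])
```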
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.heb | 40.4 | 0.605 |
### System Info:
- hf_name: ara-heb
- source_languages: ara
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'he']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: heb
- short_pair: ar-he
- chrF2_score: 0.605
- bleu: 40.4
- brevity_penalty: 1.0
- ref_len: 6801.0
- src_name: Arabic
- tgt_name: Hebrew
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: he
- prefer_old: False
- long_pair: ara-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ase-es | ad7baf4c85668f267ff41ec254dbbbf87e2937a4 | 2021-09-09T21:26:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ase",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ase-es | 10 | null | transformers | 11,468 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ase-es
* source languages: ase
* target languages: es
* OPUS readme: [ase-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.es | 31.7 | 0.498 |
|
Helsinki-NLP/opus-mt-bcl-fi | c06e553091742a39e0975d15020a3d387d78169d | 2021-09-09T21:26:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bcl",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bcl-fi | 10 | null | transformers | 11,469 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-fi
* source languages: bcl
* target languages: fi
* OPUS readme: [bcl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fi | 33.3 | 0.573 |
|
Helsinki-NLP/opus-mt-da-fi | a2e614cb32e2b0fa09c5c1dcaba8122d9d647b18 | 2021-09-09T21:29:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-da-fi | 10 | null | transformers | 11,470 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-da-fi
* source languages: da
* target languages: fi
* OPUS readme: [da-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.fi | 39.0 | 0.629 |
|
Helsinki-NLP/opus-mt-de-no | 3235e8257dd83959ea52390eccc6d72adaf71a0f | 2021-01-18T08:01:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-no | 10 | null | transformers | 11,471 | ---
language:
- de
- no
tags:
- translation
license: apache-2.0
---
### deu-nor
* source group: German
* target group: Norwegian
* OPUS readme: [deu-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-nor/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.nor | 33.2 | 0.554 |
### System Info:
- hf_name: deu-nor
- source_languages: deu
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'no']
- src_constituents: {'deu'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: nor
- short_pair: de-no
- chrF2_score: 0.5539999999999999
- bleu: 33.2
- brevity_penalty: 0.956
- ref_len: 32928.0
- src_name: German
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: no
- prefer_old: False
- long_pair: deu-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-de-pon | d18f29c5ef79abbca40d53e34b94c8514ffd6235 | 2021-09-09T21:33:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"pon",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-pon | 10 | null | transformers | 11,472 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-pon
* source languages: de
* target languages: pon
* OPUS readme: [de-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pon/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.pon | 21.0 | 0.442 |
|
Helsinki-NLP/opus-mt-en-alv | f46d3736bdbe35df72cce38c8885d75dcf4c01f9 | 2021-01-18T08:04:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-alv | 10 | null | transformers | 11,473 | ---
language:
- en
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- alv
tags:
- translation
license: apache-2.0
---
### eng-alv
* source group: English
* target group: Atlantic-Congo languages
* OPUS readme: [eng-alv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md)
* model: transformer
* source language(s): eng
* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.eval.txt)
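Since this model covers several target languages, the required `>>id<<` token is simply prepended to the source text. A minimal sketch using Zulu (`zul`, one of the target IDs listed above); the English sentence is an arbitrary example:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-alv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target language is selected with the sentence-initial >>id<< token; here Zulu (zul).
src_text = [">>zul<< The children are playing outside."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```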
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-ewe.eng.ewe | 4.9 | 0.212 |
| Tatoeba-test.eng-ful.eng.ful | 0.6 | 0.079 |
| Tatoeba-test.eng-ibo.eng.ibo | 3.5 | 0.255 |
| Tatoeba-test.eng-kin.eng.kin | 10.5 | 0.510 |
| Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.273 |
| Tatoeba-test.eng-lug.eng.lug | 5.3 | 0.340 |
| Tatoeba-test.eng.multi | 11.4 | 0.429 |
| Tatoeba-test.eng-nya.eng.nya | 18.1 | 0.595 |
| Tatoeba-test.eng-run.eng.run | 13.9 | 0.484 |
| Tatoeba-test.eng-sag.eng.sag | 5.3 | 0.194 |
| Tatoeba-test.eng-sna.eng.sna | 26.2 | 0.623 |
| Tatoeba-test.eng-swa.eng.swa | 1.0 | 0.141 |
| Tatoeba-test.eng-toi.eng.toi | 7.0 | 0.224 |
| Tatoeba-test.eng-tso.eng.tso | 46.7 | 0.643 |
| Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.359 |
| Tatoeba-test.eng-wol.eng.wol | 6.8 | 0.191 |
| Tatoeba-test.eng-xho.eng.xho | 27.1 | 0.629 |
| Tatoeba-test.eng-yor.eng.yor | 17.4 | 0.356 |
| Tatoeba-test.eng-zul.eng.zul | 34.1 | 0.729 |
### System Info:
- hf_name: eng-alv
- source_languages: eng
- target_languages: alv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']
- src_constituents: {'eng'}
- tgt_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: alv
- short_pair: en-alv
- chrF2_score: 0.429
- bleu: 11.4
- brevity_penalty: 1.0
- ref_len: 10603.0
- src_name: English
- tgt_name: Atlantic-Congo languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: alv
- prefer_old: False
- long_pair: eng-alv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-bem | 7d0c704d934f400158d645345a7ed27c6cfe73e8 | 2021-09-09T21:34:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"bem",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-bem | 10 | null | transformers | 11,474 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-bem
* source languages: en
* target languages: bem
* OPUS readme: [en-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bem/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bem | 29.7 | 0.532 |
|
Helsinki-NLP/opus-mt-en-gil | 804a2271bd5e9e694df05b4090ff1d02f1ff4bb8 | 2021-09-09T21:35:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"gil",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-gil | 10 | null | transformers | 11,475 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-gil
* source languages: en
* target languages: gil
* OPUS readme: [en-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.gil | 38.8 | 0.604 |
|
Helsinki-NLP/opus-mt-en-om | ac843ee5ccb86fa2f97e5f19c24c0606cf83fcf3 | 2021-09-09T21:38:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"om",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-om | 10 | null | transformers | 11,476 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-om
* source languages: en
* target languages: om
* OPUS readme: [en-om](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-om/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.om | 21.8 | 0.498 |
|
Helsinki-NLP/opus-mt-en-tdt | db62637f3dc8e041add32fff627b6adfffccd750 | 2021-09-09T21:39:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tdt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tdt | 10 | null | transformers | 11,477 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tdt
* source languages: en
* target languages: tdt
* OPUS readme: [en-tdt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tdt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tdt/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tdt/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tdt/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tdt | 23.8 | 0.416 |
|
Helsinki-NLP/opus-mt-en-tll | b3b42343ec7a23255d585889e1a283a10c261df7 | 2021-09-09T21:39:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tll",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tll | 10 | null | transformers | 11,478 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tll
* source languages: en
* target languages: tll
* OPUS readme: [en-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tll | 33.6 | 0.556 |
|
Helsinki-NLP/opus-mt-eo-sv | 8d96c0ffefce2a39a725c4235a09abad9410518c | 2021-01-18T08:21:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-sv | 10 | null | transformers | 11,479 | ---
language:
- eo
- sv
tags:
- translation
license: apache-2.0
---
### epo-swe
* source group: Esperanto
* target group: Swedish
* OPUS readme: [epo-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.swe | 29.5 | 0.463 |
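The BLEU and chr-F figures above can in principle be re-computed with sacrebleu from the released test set translations; a minimal sketch, assuming `hypotheses` and `references` have already been read into parallel lists of strings (the placeholder sentences below are illustrative only):
```python
import sacrebleu

# Placeholder lists; in practice read the system outputs and references
# from the released test set translations linked above.
hypotheses = ["Det här är ett exempel på en översättning."]
references = ["Det här är ett exempel på en översättning."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.3f}")
```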
### System Info:
- hf_name: epo-swe
- source_languages: epo
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'sv']
- src_constituents: {'epo'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: swe
- short_pair: eo-sv
- chrF2_score: 0.46299999999999997
- bleu: 29.5
- brevity_penalty: 0.9640000000000001
- ref_len: 10977.0
- src_name: Esperanto
- tgt_name: Swedish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: sv
- prefer_old: False
- long_pair: epo-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-bcl | 63e6ec974669bbcfe5d9fda2cc96b6823197df96 | 2021-09-09T21:41:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"bcl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-bcl | 10 | null | transformers | 11,480 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-bcl
* source languages: es
* target languages: bcl
* OPUS readme: [es-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bcl | 37.1 | 0.586 |
|
Helsinki-NLP/opus-mt-es-bzs | 10408e1265429036ad50b799a2ed300a695436b1 | 2021-09-09T21:41:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"bzs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-bzs | 10 | null | transformers | 11,481 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-bzs
* source languages: es
* target languages: bzs
* OPUS readme: [es-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bzs | 26.4 | 0.451 |
|
Helsinki-NLP/opus-mt-es-crs | 7cc1139172ef822a3e979575f39dbbe18023fe60 | 2021-09-09T21:41:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"crs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-crs | 10 | null | transformers | 11,482 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-crs
* source languages: es
* target languages: crs
* OPUS readme: [es-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-crs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-crs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-crs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.crs | 26.4 | 0.453 |
|
Helsinki-NLP/opus-mt-es-kg | 47921925586264e72fc61927479342c0936d0099 | 2021-09-09T21:43:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"kg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-kg | 10 | null | transformers | 11,483 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-kg
* source languages: es
* target languages: kg
* OPUS readme: [es-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-kg/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-kg/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-kg/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.kg | 25.6 | 0.488 |
|
Helsinki-NLP/opus-mt-es-niu | 45c4c19cb19c35c6ecf1ac2b7840e6b75f900d91 | 2021-09-09T21:43:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"niu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-niu | 10 | null | transformers | 11,484 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-niu
* source languages: es
* target languages: niu
* OPUS readme: [es-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.niu | 29.9 | 0.506 |
|
Helsinki-NLP/opus-mt-es-sn | e05ee00f47b9f408e3efc640974e77c7ee1891b5 | 2021-09-09T21:44:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"sn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-sn | 10 | null | transformers | 11,485 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-sn
* source languages: es
* target languages: sn
* OPUS readme: [es-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-sn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.sn | 23.6 | 0.528 |
|
Helsinki-NLP/opus-mt-es-zai | bfe052f3b3984410bf0497780fba8546212944af | 2021-09-09T21:45:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"zai",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-zai | 10 | null | transformers | 11,486 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-zai
* source languages: es
* target languages: zai
* OPUS readme: [es-zai](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-zai/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.zai | 20.8 | 0.426 |
|
Helsinki-NLP/opus-mt-fi-ee | 1bcb147a08fb78212e65eb8d40359386214f324f | 2021-09-09T21:47:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ee",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-ee | 10 | null | transformers | 11,487 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-ee
* source languages: fi
* target languages: ee
* OPUS readme: [fi-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ee/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ee/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ee/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.ee | 28.0 | 0.500 |
|
Helsinki-NLP/opus-mt-fi-fj | 0d2fcaa176a17c553e78e2923d8d7b15aef43066 | 2021-09-09T21:47:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"fj",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-fj | 10 | null | transformers | 11,488 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-fj
* source languages: fi
* target languages: fj
* OPUS readme: [fi-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.fj | 26.6 | 0.500 |
|
Helsinki-NLP/opus-mt-fi-hu | a5af86a3ae2fcb7d863b5efe7d2a53a70c2b253c | 2021-09-09T21:48:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"hu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-hu | 10 | null | transformers | 11,489 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-hu
* source languages: fi
* target languages: hu
* OPUS readme: [fi-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-hu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hu/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.hu | 50.4 | 0.705 |
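A minimal sketch using the high-level translation pipeline in 🤗 Transformers; the Finnish input is an arbitrary example, not taken from this card:
```python
from transformers import pipeline

# Arbitrary Finnish input sentence (not from the card).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fi-hu")
print(translator("Hyvää huomenta!"))
```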
|
Helsinki-NLP/opus-mt-fi-id | c02df96edd972d2f525b0d264afa35d2abafb136 | 2021-09-09T21:48:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"id",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-id | 10 | null | transformers | 11,490 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-id
* source languages: fi
* target languages: id
* OPUS readme: [fi-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-id/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-id/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-id/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.id | 33.8 | 0.565 |
|
Helsinki-NLP/opus-mt-fi-no | 299722aac1f5870b2dd952b850d194b06d5fe8dd | 2021-01-18T08:36:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-no | 10 | null | transformers | 11,491 | ---
language:
- fi
- no
tags:
- translation
license: apache-2.0
---
### fin-nor
* source group: Finnish
* target group: Norwegian
* OPUS readme: [fin-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-nor/README.md)
* model: transformer-align
* source language(s): fin
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fin.nor | 23.5 | 0.426 |
### System Info:
- hf_name: fin-nor
- source_languages: fin
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fi', 'no']
- src_constituents: {'fin'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.test.txt
- src_alpha3: fin
- tgt_alpha3: nor
- short_pair: fi-no
- chrF2_score: 0.426
- bleu: 23.5
- brevity_penalty: 1.0
- ref_len: 14768.0
- src_name: Finnish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: fi
- tgt_alpha2: no
- prefer_old: False
- long_pair: fin-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fi-pag | 37db2e0421a090951c2621f94c01b8eec4e52af8 | 2021-09-09T21:50:06.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"pag",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-pag | 10 | null | transformers | 11,492 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-pag
* source languages: fi
* target languages: pag
* OPUS readme: [fi-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pag/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pag/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pag/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.pag | 28.0 | 0.510 |
|
Helsinki-NLP/opus-mt-fi-pis | 8f3ea25a0b1bf93468e709e1bf519396e7f4b70b | 2021-09-09T21:50:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"pis",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-pis | 10 | null | transformers | 11,493 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-pis
* source languages: fi
* target languages: pis
* OPUS readme: [fi-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pis/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pis/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pis/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.pis | 27.5 | 0.493 |
|
Helsinki-NLP/opus-mt-fi-sw | 5608edb882b8df46608106632ff1baa1e991dd98 | 2021-09-09T21:51:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"sw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-sw | 10 | null | transformers | 11,494 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-sw
* source languages: fi
* target languages: sw
* OPUS readme: [fi-sw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.sw | 29.9 | 0.548 |
|
Helsinki-NLP/opus-mt-fi-war | d30a2fdbbbda8ada452c502b53386b98eeaaf183 | 2021-09-09T21:52:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"war",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-war | 10 | null | transformers | 11,495 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-war
* source languages: fi
* target languages: war
* OPUS readme: [fi-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-war/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-war/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-war/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.war | 35.1 | 0.565 |
|
Helsinki-NLP/opus-mt-fi-yap | 4f4a2f2e92e3c4db4fe1686f8d55fa4b76a05847 | 2021-09-09T21:52:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"yap",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-yap | 10 | null | transformers | 11,496 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-yap
* source languages: fi
* target languages: yap
* OPUS readme: [fi-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-yap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.yap | 25.4 | 0.445 |
|
Helsinki-NLP/opus-mt-fi-yo | 3994494fc8d78133a26c0aaf729b1b94de5e3fe4 | 2021-09-09T21:52:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"yo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-yo | 10 | null | transformers | 11,497 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-yo
* source languages: fi
* target languages: yo
* OPUS readme: [fi-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-yo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-yo/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yo/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yo/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.yo | 25.8 | 0.427 |
|
Helsinki-NLP/opus-mt-fr-efi | 7cd45da8a4651cd9284f10253268f1302aedbc06 | 2021-09-09T21:53:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"efi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-efi | 10 | null | transformers | 11,498 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-efi
* source languages: fr
* target languages: efi
* OPUS readme: [fr-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.efi | 26.9 | 0.462 |
|
Helsinki-NLP/opus-mt-fr-gaa | 54ca216dc6dde0faef161b60b532eeb5c32aba91 | 2021-09-09T21:53:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"gaa",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-gaa | 10 | null | transformers | 11,499 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-gaa
* source languages: fr
* target languages: gaa
* OPUS readme: [fr-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.gaa | 27.8 | 0.473 |
|