modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
richardc7/electricidad-small-finetuned-amazon-review-classification | 31f9a7d87367091f5fd0a911eda74ca99f0268ed | 2022-03-19T15:29:47.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | richardc7 | null | richardc7/electricidad-small-finetuned-amazon-review-classification | 2 | null | transformers | 25,200 | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: electricidad-small-finetuned-amazon-review-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-amazon-review-classification
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9601
- Accuracy: 0.581
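As a hedged usage sketch (not part of the generated card; the example review is illustrative and the label names depend on the checkpoint's config):
```python
from transformers import pipeline

# Sketch only: Spanish Amazon-review classification with the fine-tuned ELECTRA checkpoint.
classifier = pipeline(
    "text-classification",
    model="richardc7/electricidad-small-finetuned-amazon-review-classification",
)
print(classifier("El producto llegó tarde y en mal estado."))  # illustrative input
```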
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
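For reference, the listed values map onto a `TrainingArguments` configuration roughly as sketched below (an assumption about the training script, which the card does not include; `output_dir` is hypothetical and the Adam settings are the `Trainer` defaults):
```python
from transformers import TrainingArguments

# Sketch only: approximate reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="./electricidad-small-finetuned-amazon-review-classification",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default
)
```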
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0136 | 1.0 | 25000 | 1.0153 | 0.5414 |
| 0.9416 | 2.0 | 50000 | 0.9942 | 0.5576 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Rustem/roberta-large-copy | 938e065994b9031662aeb753d58d7c1c928ba4a6 | 2022-03-17T13:41:37.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Rustem | null | Rustem/roberta-large-copy | 2 | null | transformers | 25,201 | Entry not found |
transZ/BART_shared_clean | ae246f81b65c0edf62353894e2f6a768ddadcd6f | 2022-04-15T14:30:26.000Z | [
"pytorch",
"shared_bart",
"transformers"
] | null | false | transZ | null | transZ/BART_shared_clean | 2 | null | transformers | 25,202 | Entry not found |
sanchit-gandhi/wav2vec2-2-gpt2-regularisation | 546a77bf239b5cf3c2c4bb6e0e9fe96cbdf885ec | 2022-03-19T17:11:48.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-gpt2-regularisation | 2 | null | transformers | 25,203 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8529
- Wer: 0.9977
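As a hedged usage sketch (not from the original card; the audio file path is hypothetical and input audio is assumed to be 16 kHz mono), the speech-encoder-decoder checkpoint can typically be run through the ASR pipeline:
```python
from transformers import pipeline

# Sketch only: wav2vec2 encoder + GPT-2 decoder served via the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/wav2vec2-2-gpt2-regularisation",
)
print(asr("sample_16khz.wav"))  # hypothetical local audio file
```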
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5506 | 2.8 | 2500 | 4.4928 | 1.8772 |
| 0.5145 | 5.61 | 5000 | 1.8942 | 1.1063 |
| 0.2736 | 8.41 | 7500 | 1.6550 | 1.0372 |
| 0.0807 | 11.21 | 10000 | 1.7601 | 1.0004 |
| 0.0439 | 14.01 | 12500 | 1.8014 | 1.0022 |
| 0.043 | 16.82 | 15000 | 1.8534 | 1.0097 |
| 0.0434 | 19.62 | 17500 | 1.8529 | 0.9977 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Ameer05/cloned-bart-large-cnn | 07961554f64aa7de504de59ae7da7aea201f97ac | 2022-03-17T17:40:48.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ameer05 | null | Ameer05/cloned-bart-large-cnn | 2 | null | transformers | 25,204 | Entry not found |
internetoftim/bert-large-uncased-squad | d767b5fd7c4dcc4dc500fc1d60cfafbdad4fa699 | 2022-04-01T18:11:31.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | internetoftim | null | internetoftim/bert-large-uncased-squad | 2 | null | transformers | 25,205 | Entry not found |
cammy/PRIMERA-100-MDS | 7c5e49c23e5b85204b97100266894ddf522db529 | 2022-03-17T18:41:49.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/PRIMERA-100-MDS | 2 | null | transformers | 25,206 | Entry not found |
saghar/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103 | 6d32f756418091813e32cb6777be209c3fd12d96 | 2022-03-18T02:24:28.000Z | [
"pytorch",
"roberta",
"fill-mask",
"dataset:wikitext",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | saghar | null | saghar/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103 | 2 | null | transformers | 25,207 | ---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8236
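A hedged inference sketch (not in the original card; the example sentence is illustrative), using the `fill-mask` pipeline with RoBERTa's `<mask>` token:
```python
from transformers import pipeline

# Sketch only: masked-token prediction with the distilled checkpoint.
unmasker = pipeline(
    "fill-mask",
    model="saghar/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103",
)
print(unmasker("The capital of France is <mask>."))
```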
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.9694 | 1.0 | 3125 | 5.1757 |
| 5.2228 | 2.0 | 6250 | 4.8847 |
| 5.0653 | 3.0 | 9375 | 4.8236 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
BigSalmon/InformalToFormalLincoln27 | 5f1765ce8553bd45e1a7eadc5efe0281f04e9442 | 2022-03-18T02:40:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln27 | 2 | null | transformers | 25,208 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln27")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln27")
```
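A minimal generation sketch reusing the tokenizer and model loaded above (not part of the original card; the sampling settings are illustrative assumptions, and the prompt follows the format shown below):
```
prompt = """informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln:"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```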
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
cammy/PRIMERA-100-MDS-own | 162efd4d14155b6e0da61a8ecd331eee46917ea9 | 2022-03-18T08:17:47.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/PRIMERA-100-MDS-own | 2 | null | transformers | 25,209 | Entry not found |
moshew/paraphrase-mpnet-base-v2_SetFit_sst2 | cc9ecc9f22dae10ea5be134b5f70e76368f09e3f | 2022-03-18T07:53:15.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | moshew | null | moshew/paraphrase-mpnet-base-v2_SetFit_sst2 | 2 | 1 | sentence-transformers | 25,210 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# moshew/paraphrase-mpnet-base-v2_SetFit_sst2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
model = AutoModel.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=moshew/paraphrase-mpnet-base-v2_SetFit_sst2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8650 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
cammy/led-large-16384-arxiv-100-MDS-own | 91a1400f4041836469de08d9a746535d0c113f46 | 2022-03-18T08:29:48.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/led-large-16384-arxiv-100-MDS-own | 2 | null | transformers | 25,211 | Entry not found |
eliasws/openApiT5-to-description-v1 | 4a9d402601de9e11d71c561405104f8ec9d8a93e | 2022-03-18T10:08:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eliasws | null | eliasws/openApiT5-to-description-v1 | 2 | null | transformers | 25,212 | Entry not found |
eliasws/openApiT5-to-description-v2 | 43218f0708bb0834644175bf4fcc685787c209a6 | 2022-03-18T16:25:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eliasws | null | eliasws/openApiT5-to-description-v2 | 2 | null | transformers | 25,213 | Entry not found |
IsaacSST/gpt2-xl-ft-d1 | 072598abc8c08aa6f2fa9c2743df7489394fd83f | 2022-03-18T15:50:00.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | IsaacSST | null | IsaacSST/gpt2-xl-ft-d1 | 2 | null | transformers | 25,214 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d1
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.2130 |
| No log | 2.0 | 312 | 1.2113 |
| No log | 3.0 | 468 | 1.2585 |
| 1.2059 | 4.0 | 624 | 1.2993 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nherve/flaubert-oral-asr_nb | 9edcea8ce3c63891ceafd38fe6df743aeba2818f | 2022-04-04T10:27:01.000Z | [
"pytorch",
"flaubert",
"fr",
"transformers",
"bert",
"language-model",
"french",
"flaubert-base",
"uncased",
"asr",
"speech",
"oral",
"natural language understanding",
"NLU",
"spoken language understanding",
"SLU",
"understanding",
"license:mit"
] | null | false | nherve | null | nherve/flaubert-oral-asr_nb | 2 | null | transformers | 25,215 | ---
language: fr
license: mit
tags:
- bert
- language-model
- flaubert
- french
- flaubert-base
- uncased
- asr
- speech
- oral
- natural language understanding
- NLU
- spoken language understanding
- SLU
- understanding
---
# FlauBERT-Oral models: Using ASR-Generated Text for Spoken Language Modeling
**FlauBERT-Oral** are French BERT models trained on a very large amount of automatically transcribed speech from 350,000 hours of diverse French TV shows. They were trained with the [**FlauBERT software**](https://github.com/getalp/Flaubert) using the same parameters as the [flaubert-base-uncased](https://huggingface.co/flaubert/flaubert_base_uncased) model (12 layers, 12 attention heads, 768 dims, 137M parameters, uncased).
## Available FlauBERT-Oral models
- `flaubert-oral-asr` : trained from scratch on ASR data, keeping the BPE tokenizer and vocabulary of flaubert-base-uncased
- `flaubert-oral-asr_nb` : trained from scratch on ASR data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-mixed` : trained from scratch on a mixed corpus of ASR and text data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-ft` : fine-tuning of flaubert-base-uncased for a few epochs on ASR data
## Usage for sequence classification
```python
from transformers import FlaubertTokenizer, FlaubertForSequenceClassification

flaubert_tokenizer = FlaubertTokenizer.from_pretrained("nherve/flaubert-oral-asr")
flaubert_classif = FlaubertForSequenceClassification.from_pretrained("nherve/flaubert-oral-asr", num_labels=14)
flaubert_classif.sequence_summary.summary_type = 'mean'
# Then, train your model
```
## References
If you use FlauBERT-Oral models for your scientific publication, or if you find the resources in this repository useful, please cite the following papers:
```
@InProceedings{herve2022flaubertoral,
author = {Herv\'{e}, Nicolas and Pelloin, Valentin and Favre, Benoit and Dary, Franck and Laurent, Antoine and Meignier, Sylvain and Besacier, Laurent},
title = {Using ASR-Generated Text for Spoken Language Modeling},
booktitle = {Proceedings of "Challenges & Perspectives in Creating Large Language Models" ACL 2022 Workshop},
month = {May},
year = {2022}
}
```
|
facebook/regnet-y-032 | 5f298694c4b010d7e67fafb6c743c0e81d41689c | 2022-06-28T11:39:30.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/regnet-y-032 | 2 | null | transformers | 25,216 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
emilygs2/distilroberta-base-finetuned-genderswap | e2674676033c32b3b53871fd56545a21a2d6e4a0 | 2022-03-18T16:35:12.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | emilygs2 | null | emilygs2/distilroberta-base-finetuned-genderswap | 2 | null | transformers | 25,217 | Entry not found |
nebo333/distilbert-base-uncased-finetuned-emotion | dd533d4523aa34fb828116ea9c4833059d685fae | 2022-03-18T22:34:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | nebo333 | null | nebo333/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,218 | Entry not found |
Valouzze/MegaIA | 165070c2a7b6c49567d12bebfc0199d7ab3a3df4 | 2022-03-18T20:20:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Valouzze | null | Valouzze/MegaIA | 2 | null | transformers | 25,219 | ---
tags:
- conversational
---
# My Awesome Model
|
vinaykudari/t5-ft-billsum | 68a256d21dd70e7f8b52bbb6e581ea55c704f42b | 2022-03-18T23:11:57.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:billsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | vinaykudari | null | vinaykudari/t5-ft-billsum | 2 | null | transformers | 25,220 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
model-index:
- name: t5-ft-billsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-ft-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2752
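A hedged usage sketch (not part of the generated card; the input text and length limits are illustrative):
```python
from transformers import pipeline

# Sketch only: bill summarization with the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="vinaykudari/t5-ft-billsum")
text = "The bill amends the Internal Revenue Code to ..."  # illustrative placeholder
print(summarizer(text, max_length=64, min_length=16, do_sample=False))
```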
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 99 | 2.6250 |
| No log | 2.0 | 198 | 2.4587 |
| No log | 3.0 | 297 | 2.3865 |
| No log | 4.0 | 396 | 2.3431 |
| No log | 5.0 | 495 | 2.3226 |
| 2.7775 | 6.0 | 594 | 2.3019 |
| 2.7775 | 7.0 | 693 | 2.2882 |
| 2.7775 | 8.0 | 792 | 2.2802 |
| 2.7775 | 9.0 | 891 | 2.2764 |
| 2.7775 | 10.0 | 990 | 2.2752 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Rustem/roberta-base-trained-43 | 8cd2c337801ace021867c124ffffd32a09f2ed7a | 2022-03-19T09:27:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Rustem | null | Rustem/roberta-base-trained-43 | 2 | null | transformers | 25,221 | Entry not found |
Ameer05/updated-bart-large-cnn | 676c44e32425967b47edd122abef5694cd085629 | 2022-03-19T12:48:25.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ameer05 | null | Ameer05/updated-bart-large-cnn | 2 | null | transformers | 25,222 | Entry not found |
selimsametoglu/selims | 695cb362e5008e8c0c926107a93f8dcb18331f7e | 2022-03-21T11:01:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | selimsametoglu | null | selimsametoglu/selims | 2 | null | transformers | 25,223 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- tweet_eval
model-index:
- name: selims
results: []
widget:
- text: "I love conducting research on twins!"
example_title: "Sentiment analysis - English"
- text: "Ja, ik vind het tweelingen onderzoek leuk maar complex, weet je."
example_title: "Sentiment analysis - Dutch"
---
# selims
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the tweet_eval dataset.
## Model description
This is a multilingual sentiment-analysis model that outputs scores from 1 to 5, following the same logic as 1-to-5-star reviews.
## Intended uses & limitations
This sentiment model can be applied to datasets in the following languages: English, Dutch, German, French, Spanish, and Italian.
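As a hedged usage sketch (the card itself contains no code; the example sentences are taken from the widget above, and the star-style labels are inherited from the base model rather than documented for this checkpoint):
```python
from transformers import pipeline

# Sketch only: scores text on a 1-5 scale; labels follow the base model's "1 star" ... "5 stars" scheme.
sentiment = pipeline("sentiment-analysis", model="selimsametoglu/selims")
print(sentiment([
    "I love conducting research on twins!",
    "Ja, ik vind het tweelingen onderzoek leuk maar complex, weet je.",
]))
```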
## Training and evaluation data
For fine-tuning this model, the tweet_eval dataset was used.
## Training procedure
Please refer to the information below:
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cpu
- Datasets 2.0.0
- Tokenizers 0.10.3
|
Makinitas/DialoGPT-small-RickAndMortyScripts | f5feee1261ba1391d3d9b9c1962a79491115211c | 2022-03-19T17:47:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Makinitas | null | Makinitas/DialoGPT-small-RickAndMortyScripts | 2 | null | transformers | 25,224 | ---
tags:
- conversational
---
# Rick And Morty DialoGPT Model
|
axiomepic/hub_model_id | ade524b309ed0f903c3640ba4159bbd3997d6234 | 2022-03-19T21:21:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | axiomepic | null | axiomepic/hub_model_id | 2 | null | transformers | 25,225 | Entry not found |
apkbala107/tamilberta | c34ab91a9a7273210be78cc992826d3bba9eba18 | 2022-03-19T18:17:06.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:cc",
"autotrain_compatible"
] | fill-mask | false | apkbala107 | null | apkbala107/tamilberta | 2 | null | transformers | 25,226 | ---
license: cc
---
|
KheireddineDaouadi/AraRobertaAut | 3eb0d09e838b14d9eb10c0a6518c892849795935 | 2022-03-20T20:31:23.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KheireddineDaouadi | null | KheireddineDaouadi/AraRobertaAut | 2 | null | transformers | 25,227 | Entry not found |
duanxingjuan/DialoGPT-medium-DEMON_SLAYER | 8317ec32e343073f2d862b805c3ec085017720cb | 2022-03-20T11:49:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | duanxingjuan | null | duanxingjuan/DialoGPT-medium-DEMON_SLAYER | 2 | null | transformers | 25,228 | ---
tags:
- conversational
---
# DEMON_SLAYER DialoGPT Model |
tbosse/distilbert-base-uncased-finetuned-pos | 99836fc71c7bacf6f083bc612a8d7ed11e0c7aa7 | 2022-03-25T00:02:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | tbosse | null | tbosse/distilbert-base-uncased-finetuned-pos | 2 | null | transformers | 25,229 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-pos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9109037731744458
- name: Recall
type: recall
value: 0.9143515710299648
- name: F1
type: f1
value: 0.9126244157605404
- name: Accuracy
type: accuracy
value: 0.9245555785025498
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-pos
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3165
- Precision: 0.9109
- Recall: 0.9144
- F1: 0.9126
- Accuracy: 0.9246
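A hedged inference sketch (not in the original card; the example sentence is illustrative, and the label set comes from the conll2003 configuration used for fine-tuning):
```python
from transformers import pipeline

# Sketch only: token-level tagging with the fine-tuned DistilBERT checkpoint.
tagger = pipeline(
    "token-classification",
    model="tbosse/distilbert-base-uncased-finetuned-pos",
    aggregation_strategy="simple",
)
print(tagger("Hugging Face is based in New York City."))
```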
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7941 | 1.0 | 878 | 0.3504 | 0.8995 | 0.9026 | 0.9011 | 0.9176 |
| 0.2533 | 2.0 | 1756 | 0.3216 | 0.9091 | 0.9104 | 0.9098 | 0.9233 |
| 0.2047 | 3.0 | 2634 | 0.3165 | 0.9109 | 0.9144 | 0.9126 | 0.9246 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jcai1/similarity6 | 292c4f6e6a91307f893995437265628afd6c8c13 | 2022-03-20T21:38:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jcai1 | null | jcai1/similarity6 | 2 | null | transformers | 25,230 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: similarity6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# similarity6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 393 | 0.2287 | 0.9341 | 0.9112 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl_ft_logits_5k_2 | 80f0cf57115b5ce6a02a1f19a48e3601f5e31cd7 | 2022-03-21T10:16:30.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_logits_5k_2 | 2 | null | transformers | 25,231 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_5k_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_5k_2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.1106 |
| No log | 1.99 | 54 | 6.1400 |
| No log | 2.99 | 81 | 6.1875 |
| No log | 3.99 | 108 | 6.2407 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59415626525879 |
IsaacSST/gpt2-xl-ft-d4-0.3 | 75dd51de33e70152b47608e1f4ab87a300c092c0 | 2022-03-21T04:24:22.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | IsaacSST | null | IsaacSST/gpt2-xl-ft-d4-0.3 | 2 | null | transformers | 25,232 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d4-0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d4-0.3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.2334 |
| No log | 2.0 | 312 | 1.2392 |
| No log | 3.0 | 468 | 1.2944 |
| 1.1868 | 4.0 | 624 | 1.3401 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
QWEasd1122/distilbert-base-uncased-finetuned-squad | 26e9b0c7739bd6e1e46f33b6c3098222f03078d7 | 2022-03-22T03:43:44.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | QWEasd1122 | null | QWEasd1122/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 25,233 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3665
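A hedged usage sketch (the generated card gives no examples; the question and context below are illustrative):
```python
from transformers import pipeline

# Sketch only: extractive question answering with the fine-tuned DistilBERT checkpoint.
qa = pipeline("question-answering", model="QWEasd1122/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What is extracted from the context?",
    context="The model predicts the start and end positions of the answer span within the context.",
)
print(result)
```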
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 3.5584 |
| No log | 2.0 | 104 | 3.3937 |
| No log | 3.0 | 156 | 3.3665 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln28 | d71c5f2eaa8c945f89098d4093156c80ce69b612 | 2022-03-21T03:14:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln28 | 2 | null | transformers | 25,234 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln28")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln28")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
PSW/ut-del-two-at-once-ver1 | 5728a481e3a170e49f76b3ecd1882b1525230ff9 | 2022-03-21T05:02:36.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut-del-two-at-once-ver1 | 2 | null | transformers | 25,235 | Entry not found |
bdotloh/twitter-roberta-base-finetuned-twitter-user-desc | b7c82c330590cbea2328e1a026e08f273080f559 | 2022-03-25T04:12:19.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | bdotloh | null | bdotloh/twitter-roberta-base-finetuned-twitter-user-desc | 2 | null | transformers | 25,236 | ---
tags:
- generated_from_trainer
model-index:
- name: twitter-roberta-base-finetuned-twitter-user-desc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-finetuned-twitter-user-desc
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on a dataset of twitter user descriptions.
It achieves the following results on the evaluation set:
- eval_perplexity: 2.33
- epoch: 15
- step: 10635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
tau/t5_lm_1024_0.3_epoch1_v2 | 4ed20743bc6199c5e77ddf507036acbbe522720a | 2022-03-21T08:09:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/t5_lm_1024_0.3_epoch1_v2 | 2 | null | transformers | 25,237 | Entry not found |
Dahn/wav2vec2-base-timit-demo-colab | dd16c03c92107bff83d4880536669cf2c143b647 | 2022-03-21T13:04:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Dahn | null | Dahn/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 25,238 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4796
- Wer: 0.3434
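A hedged usage sketch (not part of the generated card; the audio file path is hypothetical and is assumed to be 16 kHz mono):
```python
from transformers import pipeline

# Sketch only: CTC decoding of a local audio file with the fine-tuned wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="Dahn/wav2vec2-base-timit-demo-colab")
print(asr("example_16khz.wav"))  # hypothetical file path
```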
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4323 | 4.0 | 500 | 1.3259 | 0.9859 |
| 0.5966 | 8.0 | 1000 | 0.4682 | 0.4442 |
| 0.2187 | 12.0 | 1500 | 0.4490 | 0.3875 |
| 0.1274 | 16.0 | 2000 | 0.4595 | 0.3727 |
| 0.0859 | 20.0 | 2500 | 0.4819 | 0.3683 |
| 0.0602 | 24.0 | 3000 | 0.4524 | 0.3514 |
| 0.0449 | 28.0 | 3500 | 0.4796 | 0.3434 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Yaxin/xlm-roberta-base-amzaon-reviews-mlm | 1c5c3e7cde05feabf51f472df873018a895d148a | 2022-03-21T15:51:49.000Z | [
"pytorch",
"dataset:amazon_reviews_multi",
"generated_from_trainer",
"license:mit",
"model-index"
] | null | false | Yaxin | null | Yaxin/xlm-roberta-base-amzaon-reviews-mlm | 2 | null | null | 25,239 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: test-xlm-roberta-base-amzaon-reviews-mlm
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: amazon_reviews_multi all_languages
type: amazon_reviews_multi
args: all_languages
metrics:
- name: Accuracy
type: accuracy
value: 0.5032103794889962
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-xlm-roberta-base-amzaon-reviews-mlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi all_languages dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1091
- Accuracy: 0.5032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0 |
Ameer05/model-tokenizer-repo | 68d7abc9bb52821c1e23fd69598c9c701b0b1d2b | 2022-03-21T16:45:24.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ameer05 | null | Ameer05/model-tokenizer-repo | 2 | null | transformers | 25,240 | Entry not found |
elena-soare/docu-t5-base-FK | a7b1263ec21bddadd451446f7ebb880a8a4ba2eb | 2022-04-04T14:34:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | elena-soare | null | elena-soare/docu-t5-base-FK | 2 | null | transformers | 25,241 | # Text2SQL Task T5-Base + Foreign Keys
This is our T5 model fine-tuned on Spider using a schema serialization that includes foreign keys.
## Running the model
Inspired by the work done by [Picard](https://github.com/ElementAI/picard/), with foreign-key relations added to the schema serialization.
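A minimal loading-and-generation sketch (not part of the original card) is shown below; the serialized question-plus-schema string is hypothetical and only illustrates the Picard-style input format.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "elena-soare/docu-t5-base-FK"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical serialized input: question | db_id | tables, columns and foreign keys
text = (
    "How many singers do we have? | concert_singer | "
    "singer : singer_id, name, age | concert : concert_id, singer_id"
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```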
|
elena-soare/bat-pre-trained | 521e3ea5c79e8e10e60e3581cb7655d159067286 | 2022-03-21T22:23:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | elena-soare | null | elena-soare/bat-pre-trained | 2 | null | transformers | 25,242 | # Text2SQL Task T5-Base + E-commerce pre-training
This is our T5 model pre-trained on 18k e-commerce pages from popular blogs and fine-tuned on Spider using a schema serialization.
## Running the model
Inspired by the work done by [Picard](https://github.com/ElementAI/picard/), with a pre-training step added for better performance on e-commerce data. Inputs follow the schema serialization below:
```
[question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ...
```
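A rough usage sketch following the template above (the database, tables and contents are illustrative, not from the original card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "elena-soare/bat-pre-trained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# [question] | [db_id] | [table] : [column] ( [content] , [content] ) , ...
serialized = (
    "What is the average price of laptops? | store | "
    "products : name ( laptop , phone ) , price ( 999 , 499 )"
)
inputs = tokenizer(serialized, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```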
|
Bistolero/mt5_two_epocs_nl | cd79e50a48177a82454ebbaaff96c1af962ed9d3 | 2022-03-21T22:58:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/mt5_two_epocs_nl | 2 | null | transformers | 25,243 | Entry not found |
danyaljj/gpt-j-6B-step-378500 | 4c6169b3a50f7bd10e224f79dcf6638dc8a10af2 | 2022-03-22T23:09:46.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-378500 | 2 | null | transformers | 25,244 | Entry not found |
danyaljj/gpt-j-6B-step-383000 | 2078e2329ee0f4896b39fe2256daade197474da2 | 2022-03-22T23:10:04.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-383000 | 2 | null | transformers | 25,245 | Entry not found |
Bistolero/mix_training_en_du_nl | e6e81ddcf1bf3946b3a4f0c7562f0f33a59cd30d | 2022-03-22T01:48:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/mix_training_en_du_nl | 2 | null | transformers | 25,246 | Entry not found |
asahi417/tner-roberta-large-tweet-st-2020 | 207a05c2c79a3e8e209e5457025ab15d21275e22 | 2022-04-28T12:40:51.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | asahi417 | null | asahi417/tner-roberta-large-tweet-st-2020 | 2 | null | transformers | 25,247 | Entry not found |
Taekyoon/unicon_v0.5.3_alpha | 802f82bc7fcbd3ac364dbde25c69ed53230d1fa0 | 2022-03-22T04:06:33.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Taekyoon | null | Taekyoon/unicon_v0.5.3_alpha | 2 | null | transformers | 25,248 | Entry not found |
aaraki/wav2vec2-base-demo-colab | 708f9b8e9b2211cdabae550ead31f4459f95afcb | 2022-03-22T07:43:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | aaraki | null | aaraki/wav2vec2-base-demo-colab | 2 | null | transformers | 25,249 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Doogie/Wayne_Mulang_mT5 | a1f446c9094e493a23808b236402f71eb4e86ae3 | 2022-04-19T05:38:52.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Doogie | null | Doogie/Wayne_Mulang_mT5 | 2 | null | transformers | 25,250 | Entry not found |
PSW/ut_del_two_at_once_ver4 | 1a09fad15e0bd56d655ec0aff6ed8ae55f36c4d8 | 2022-03-22T06:53:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_at_once_ver4 | 2 | null | transformers | 25,251 | Entry not found |
PSW/ut_del_two_at_once_ver5 | f8c5d2bbefa6e219297cabc13a802e0a5aacadc8 | 2022-03-22T08:14:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_at_once_ver5 | 2 | null | transformers | 25,252 | Entry not found |
hawoihgawjlj/STS-Team3 | a4daa709fc11a6f1f3b9a2fc52ffff42e8b54346 | 2022-03-22T09:27:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | hawoihgawjlj | null | hawoihgawjlj/STS-Team3 | 2 | null | transformers | 25,253 | Entry not found |
caiosantillo/distilbert-base-uncased-finetuned-squad | bf7fc3c2232683a3c4896ef31378117841c1c482 | 2022-05-10T15:05:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | caiosantillo | null | caiosantillo/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 25,254 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2125 | 1.0 | 5533 | 1.1521 |
| 0.9496 | 2.0 | 11066 | 1.1227 |
| 0.7499 | 3.0 | 16599 | 1.1551 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
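For reference (not part of the auto-generated card), a minimal extractive question-answering sketch with this checkpoint could look like the following; the question and context are arbitrary.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="caiosantillo/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```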
|
edwardjross/xlm-roberta-base-finetuned-panx-de | 887e4258dbd07dd4636441bc7d5f91b3e6dc099a | 2022-03-22T13:06:25.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | edwardjross | null | edwardjross/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 25,255 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8644809364168419
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1360
- F1: 0.8645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2528 | 1.0 | 787 | 0.1657 | 0.8244 |
| 0.1298 | 2.0 | 1574 | 0.1369 | 0.8555 |
| 0.0787 | 3.0 | 2361 | 0.1360 | 0.8645 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
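A minimal tagging sketch (not part of the original card); the German example sentence is arbitrary, and the entity labels are assumed to follow the usual WikiANN/PAN-X scheme.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
# "Jeff Dean works at Google in California."
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```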
|
edwardjross/xlm-roberta-base-finetuned-panx-de-fr | f79dafd5aadd7399568417643ddfc08f10a8afa3 | 2022-03-22T13:22:21.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | edwardjross | null | edwardjross/xlm-roberta-base-finetuned-panx-de-fr | 2 | null | transformers | 25,256 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
- F1: 0.8606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2819 | 1.0 | 1073 | 0.1800 | 0.8231 |
| 0.1484 | 2.0 | 2146 | 0.1655 | 0.8488 |
| 0.0928 | 3.0 | 3219 | 0.1686 | 0.8606 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
edwardjross/xlm-roberta-base-finetuned-panx-en | a4bf7d692123c74d11752e015a09ef062083e817 | 2022-03-22T13:33:38.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | edwardjross | null | edwardjross/xlm-roberta-base-finetuned-panx-en | 2 | null | transformers | 25,257 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6918378678511938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3792
- F1: 0.6918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0639 | 1.0 | 74 | 0.5075 | 0.5539 |
| 0.491 | 2.0 | 148 | 0.4118 | 0.6510 |
| 0.355 | 3.0 | 222 | 0.3792 | 0.6918 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
edwardjross/xlm-roberta-base-finetuned-panx-all | 4842454eb020e671cd07375186c4f392fe79a30e | 2022-03-22T13:46:27.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | edwardjross | null | edwardjross/xlm-roberta-base-finetuned-panx-all | 2 | null | transformers | 25,258 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1812
- F1: 0.8567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2983 | 1.0 | 1252 | 0.1945 | 0.8033 |
| 0.1603 | 2.0 | 2504 | 0.1889 | 0.8441 |
| 0.1012 | 3.0 | 3756 | 0.1812 | 0.8567 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Splend1dchan/t5lephone-small | 0a248f7bfc9dc2a58bd3602c0334ba7d36cb544c | 2022-04-06T12:38:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/t5lephone-small | 2 | null | transformers | 25,259 | Entry not found |
jaketae/fastspeech2-commonvoice | 036c65a958e74983e6babc742f80f7a99f3e5cc4 | 2022-04-16T07:26:52.000Z | [
"pytorch",
"fastspeech2",
"transformers"
] | null | false | jaketae | null | jaketae/fastspeech2-commonvoice | 2 | null | transformers | 25,260 | Entry not found |
duanxingjuan/DialoGPT-large-DEMON1 | 166ac34b0ea08ff3c131bbe33f8ba21a62fae7c9 | 2022-03-23T01:04:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | duanxingjuan | null | duanxingjuan/DialoGPT-large-DEMON1 | 2 | null | transformers | 25,261 | ---
tags:
- conversational
---
# DEMON_SLAYER DialoGPT Model v5 |
PSW/ut_del_three_per_each_ver2 | 605c65cd7668147930404ed461ae978c4690a537 | 2022-03-23T02:12:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_three_per_each_ver2 | 2 | null | transformers | 25,262 | Entry not found |
Pavithra/codeparrot-ds-sample | bbc16b2274d187595a40bf4a9519de7dd11c76f9 | 2022-03-24T06:41:47.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Pavithra | null | Pavithra/codeparrot-ds-sample | 2 | null | transformers | 25,263 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5219
- eval_runtime: 603.3856
- eval_samples_per_second: 154.402
- eval_steps_per_second: 4.826
- epoch: 0.15
- step: 10000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
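As an illustration (not from the original card), completions can be sampled with a plain text-generation pipeline; the prompt below is arbitrary Python.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Pavithra/codeparrot-ds-sample")

prompt = "def mean(values):\n    "
print(generator(prompt, max_length=64, num_return_sequences=1)[0]["generated_text"])
```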
|
pinot/wav2vec2-base-timit-demo-colab | 0882359faaaabae96e0cda5d2d362b20120a2319 | 2022-05-12T14:37:53.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pinot | null | pinot/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 25,264 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4548
- Wer: 0.3373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3291 | 4.0 | 500 | 1.0403 | 0.7174 |
| 0.5336 | 8.0 | 1000 | 0.4744 | 0.4489 |
| 0.2155 | 12.0 | 1500 | 0.4476 | 0.3832 |
| 0.1256 | 16.0 | 2000 | 0.4358 | 0.3639 |
| 0.0867 | 20.0 | 2500 | 0.4634 | 0.3527 |
| 0.0608 | 24.0 | 3000 | 0.4784 | 0.3466 |
| 0.0476 | 28.0 | 3500 | 0.4548 | 0.3373 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
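For a quick smoke test (not part of the auto-generated card), the ASR pipeline can be pointed at this checkpoint; the audio path is a placeholder and should reference a 16 kHz mono recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pinot/wav2vec2-base-timit-demo-colab")
print(asr("example.wav")["text"])
```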
|
sumedh/autonlp-MeQSum-1-660519466 | 254148a57d182c03e6df2437436940b66295657c | 2022-03-23T07:16:44.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:sumedh/autotrain-data-MeQSum-1",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | sumedh | null | sumedh/autonlp-MeQSum-1-660519466 | 2 | null | transformers | 25,265 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- sumedh/autotrain-data-MeQSum-1
co2_eq_emissions: 35.865521343923916
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 660519466
- CO2 Emissions (in grams): 35.865521343923916
## Validation Metrics
- Loss: 1.3210543394088745
- Rouge1: 52.1593
- Rouge2: 34.5464
- RougeL: 50.1141
- RougeLsum: 50.1067
- Gen Len: 11.93
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/sumedh/autonlp-MeQSum-1-660519466
``` |
hawoihgawjlj/Task3-STS-Team3 | a9e668d0cdc1840c0b906486679a6c9cc43101b2 | 2022-03-23T17:19:25.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | hawoihgawjlj | null | hawoihgawjlj/Task3-STS-Team3 | 2 | null | transformers | 25,266 | Entry not found |
Helsinki-NLP/opus-mt-tc-base-uk-ces_slk | a63ec1610c25c67b8b5f78bf5a3b3bab02db2186 | 2022-06-01T13:08:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"sk",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-uk-ces_slk | 2 | null | transformers | 25,267 | ---
language:
- cs
- sk
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-uk-ces_slk
results:
- task:
name: Translation ukr-ces
type: translation
args: ukr-ces
dataset:
name: flores101-devtest
type: flores_101
args: ukr ces devtest
metrics:
- name: BLEU
type: bleu
value: 23.0
- task:
name: Translation ukr-slk
type: translation
args: ukr-slk
dataset:
name: flores101-devtest
type: flores_101
args: ukr slk devtest
metrics:
- name: BLEU
type: bleu
value: 22.1
- task:
name: Translation ukr-ces
type: translation
args: ukr-ces
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-ces
metrics:
- name: BLEU
type: bleu
value: 54.2
---
# opus-mt-tc-base-uk-ces_slk
Neural machine translation model for translating from Ukrainian (uk) to Czech and Slovak (cs+sk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): ukr
* target language(s): ces
* valid target language labels: >>ces<< >>slk<<
* model: transformer-align
* data: opusTCv20210807+pft ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pft_transformer-align_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces+slk/opusTCv20210807+pft_transformer-align_2022-03-17.zip)
* more information about released models: [OPUS-MT ukr-ces+slk README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ces+slk/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>ces<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ces<< А чого так?",
">>ces<< Я загубив свої окуляри."
]
model_name = "pytorch-models/opus-mt-tc-base-uk-ces_slk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Proč to tak je?
# Ztratil jsem brýle.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-uk-ces_slk")
print(pipe(">>ces<< А чого так?"))
# expected output: Proč to tak je?
```
## Benchmarks
* test set translations: [opusTCv20210807+pft_transformer-align_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces+slk/opusTCv20210807+pft_transformer-align_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+pft_transformer-align_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces+slk/opusTCv20210807+pft_transformer-align_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ukr-ces | tatoeba-test-v2021-08-07 | 0.70661 | 54.2 | 1787 | 8550 |
| ukr-ces | flores101-devtest | 0.51283 | 23.0 | 1012 | 22101 |
| ukr-slk | flores101-devtest | 0.51043 | 22.1 | 1012 | 22543 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: f084bad
* port time: Wed Mar 23 21:54:02 EET 2022
* port machine: LM0-400-22516.local
|
ScandinavianMrT/gpt2_ONION_prefinetune_4.0 | 36182d951d0bdfa31db573f49fa717454e1eef12 | 2022-03-23T18:39:51.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | ScandinavianMrT | null | ScandinavianMrT/gpt2_ONION_prefinetune_4.0 | 2 | null | transformers | 25,268 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_ONION_prefinetune_4.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_ONION_prefinetune_4.0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 153 | 4.7368 |
| No log | 2.0 | 306 | 4.6732 |
| No log | 3.0 | 459 | 4.6527 |
| 4.8529 | 4.0 | 612 | 4.6484 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
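A minimal sampling sketch (not part of the auto-generated card); the prompt is arbitrary.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ScandinavianMrT/gpt2_ONION_prefinetune_4.0")
print(generator("Area man", max_length=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```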
|
BigSalmon/InformalToFormalLincoln30 | 9c7fb9f675284f9fa76bcfb9c9939fddac42762f | 2022-03-23T20:51:13.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln30 | 2 | null | transformers | 25,269 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln30")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln30")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
huggingtweets/coscorrodrift | 8562e811a34dcce35db2dbedd8d3e90ba2adad95 | 2022-03-23T22:21:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/coscorrodrift | 2 | null | transformers | 25,270 | ---
language: en
thumbnail: http://www.huggingtweets.com/coscorrodrift/1648073956402/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1363260889164623877/vz-U9f3l_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">coscorrodrift</div>
<div style="text-align: center; font-size: 14px;">@coscorrodrift</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from coscorrodrift.
| Data | coscorrodrift |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 192 |
| Short tweets | 405 |
| Tweets kept | 2650 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3elna51z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coscorrodrift's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mof7q9s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mof7q9s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/coscorrodrift')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Bistolero/french_all | 71b8c73ff81605d9189182ee0562f9feb4de4039 | 2022-03-23T23:49:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/french_all | 2 | null | transformers | 25,271 | Entry not found |
huggingtweets/btohtoh | daa196d7a03858603d19b14203b841d74195b120 | 2022-03-24T01:35:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/btohtoh | 2 | null | transformers | 25,272 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506402743296020484/X79Yfcx5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BToh</div>
<div style="text-align: center; font-size: 14px;">@btohtoh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BToh.
| Data | BToh |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 347 |
| Short tweets | 480 |
| Tweets kept | 2414 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xnk5832/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @btohtoh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gdcu3k6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gdcu3k6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/btohtoh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rurupang/roberta-base-finetuned-sts-f1 | eec33717b9de782f68d55578039320650604ba2f | 2022-03-24T07:40:19.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | rurupang | null | rurupang/roberta-base-finetuned-sts-f1 | 2 | null | transformers | 25,273 | Entry not found |
MrBananaHuman/kobart-base-v2-summarization | ec357bc99e98b3802508ed63bd69641beb10e092 | 2022-03-24T04:25:30.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | MrBananaHuman | null | MrBananaHuman/kobart-base-v2-summarization | 2 | null | transformers | 25,274 | ---
license: apache-2.0
---
|
yy642/bert-base-uncased-finetuned-rte-max-length-512-epoch-10 | ac338b0e30d0163a3f8c91ee0fef7789715737e0 | 2022-03-24T05:45:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-rte-max-length-512-epoch-10 | 2 | null | transformers | 25,275 | Entry not found |
neal49/distilbert-sst2-withmlm-5e-1 | c2a5967c8dc9a2a7ff51c815755d9735d6d0a95c | 2022-03-24T07:16:49.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | neal49 | null | neal49/distilbert-sst2-withmlm-5e-1 | 2 | null | transformers | 25,276 | Entry not found |
buvnswrn/daml-t5 | 143242388ee221a6b3980f8857953b0286fbce98 | 2022-04-11T09:26:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | buvnswrn | null | buvnswrn/daml-t5 | 2 | null | transformers | 25,277 | Entry not found |
zuppif/resnet-d-18 | 90cdda6f7fdf8e25088c012521b3b479e7ac14b5 | 2022-03-24T08:57:16.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnet-d-18 | 2 | null | transformers | 25,278 | Entry not found |
zuppif/resnet-d-26 | a8021524619cba3d81cef29528f2891c03c1f7e1 | 2022-03-24T08:58:06.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnet-d-26 | 2 | null | transformers | 25,279 | Entry not found |
zuppif/resnet-d-50 | 89634f181783c0eee47e85579abe091d0dd91356 | 2022-03-24T09:00:13.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnet-d-50 | 2 | null | transformers | 25,280 | Entry not found |
zuppif/resnet-d-200 | a238f76f494b8a79aa9d025ab5838d3d5226ab75 | 2022-03-24T09:05:26.000Z | [
"pytorch",
"resnetd",
"transformers"
] | null | false | zuppif | null | zuppif/resnet-d-200 | 2 | null | transformers | 25,281 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-fi-zle | f1e8639e5b7c09ca4d79627d3f7b58dc49de4bf2 | 2022-06-01T13:09:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-fi-zle | 2 | null | transformers | 25,282 | ---
language:
- fi
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-fi-zle
results:
- task:
name: Translation fin-rus
type: translation
args: fin-rus
dataset:
name: flores101-devtest
type: flores_101
args: fin rus devtest
metrics:
- name: BLEU
type: bleu
value: 21.4
- task:
name: Translation fin-ukr
type: translation
args: fin-ukr
dataset:
name: flores101-devtest
type: flores_101
args: fin ukr devtest
metrics:
- name: BLEU
type: bleu
value: 17.9
- task:
name: Translation fin-rus
type: translation
args: fin-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fin-rus
metrics:
- name: BLEU
type: bleu
value: 47.0
---
# opus-mt-tc-big-fi-zle
Neural machine translation model for translating from Finnish (fi) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): fin
* target language(s): rus ukr
* valid target language labels: >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zle/opusTCv20210807+bt_transformer-big_2022-03-17.zip)
* more information about released models: [OPUS-MT fin-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>rus<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>rus<< Äänestimme jo.",
">>ukr<< Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen."
]
model_name = "pytorch-models/opus-mt-tc-big-fi-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Мы уже проголосовали.
# Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fi-zle")
print(pipe(">>rus<< Äänestimme jo."))
# expected output: Мы уже проголосовали.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zle/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zle/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fin-rus | tatoeba-test-v2021-08-07 | 0.67247 | 47.0 | 3643 | 21497 |
| fin-rus | flores101-devtest | 0.49920 | 21.4 | 1012 | 23295 |
| fin-ukr | flores101-devtest | 0.46935 | 17.9 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 42126b6
* port time: Thu Mar 24 09:34:57 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zle-gmq | af590a1620be76c154a85c2390bbe0bb9b31c2e9 | 2022-06-01T13:04:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tc",
"big",
"zle",
"gmq",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-gmq | 2 | null | transformers | 25,283 | ---
language:
- da
- gmq
- nb
- "no"
- ru
- sv
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-gmq
results:
- task:
name: Translation rus-dan
type: translation
args: rus-dan
dataset:
name: flores101-devtest
type: flores_101
args: rus dan devtest
metrics:
- name: BLEU
type: bleu
value: 28.0
- task:
name: Translation rus-nob
type: translation
args: rus-nob
dataset:
name: flores101-devtest
type: flores_101
args: rus nob devtest
metrics:
- name: BLEU
type: bleu
value: 20.6
- task:
name: Translation rus-swe
type: translation
args: rus-swe
dataset:
name: flores101-devtest
type: flores_101
args: rus swe devtest
metrics:
- name: BLEU
type: bleu
value: 26.4
- task:
name: Translation ukr-dan
type: translation
args: ukr-dan
dataset:
name: flores101-devtest
type: flores_101
args: ukr dan devtest
metrics:
- name: BLEU
type: bleu
value: 30.3
- task:
name: Translation ukr-nob
type: translation
args: ukr-nob
dataset:
name: flores101-devtest
type: flores_101
args: ukr nob devtest
metrics:
- name: BLEU
type: bleu
value: 21.1
- task:
name: Translation ukr-swe
type: translation
args: ukr-swe
dataset:
name: flores101-devtest
type: flores_101
args: ukr swe devtest
metrics:
- name: BLEU
type: bleu
value: 28.8
- task:
name: Translation rus-dan
type: translation
args: rus-dan
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-dan
metrics:
- name: BLEU
type: bleu
value: 59.6
- task:
name: Translation rus-nob
type: translation
args: rus-nob
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-nob
metrics:
- name: BLEU
type: bleu
value: 46.1
- task:
name: Translation rus-swe
type: translation
args: rus-swe
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-swe
metrics:
- name: BLEU
type: bleu
value: 53.3
---
# opus-mt-tc-big-zle-gmq
Neural machine translation model for translating from East Slavic languages (zle) to North Germanic languages (gmq).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-14
* source language(s): rus ukr
* target language(s): dan nob nor swe
* valid target language labels: >>dan<< >>nob<< >>nor<< >>swe<<
* model: transformer-big
* data: opusTCv20210807+pft ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pft_transformer-big_2022-03-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-gmq/opusTCv20210807+pft_transformer-big_2022-03-14.zip)
* more information about released models: [OPUS-MT zle-gmq README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-gmq/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>dan<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>dan<< Заўтра ўжо чацвер.",
">>swe<< Том грав з Мері в кішки-мишки."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-gmq"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# I morgen er det torsdag.
# Tom lekte med Mary i katt-möss.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-gmq")
print(pipe(">>dan<< Заўтра ўжо чацвер."))
# expected output: I morgen er det torsdag.
```
## Benchmarks
* test set translations: [opusTCv20210807+pft_transformer-big_2022-03-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-gmq/opusTCv20210807+pft_transformer-big_2022-03-14.test.txt)
* test set scores: [opusTCv20210807+pft_transformer-big_2022-03-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-gmq/opusTCv20210807+pft_transformer-big_2022-03-14.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| rus-dan | tatoeba-test-v2021-08-07 | 0.74307 | 59.6 | 1713 | 11746 |
| rus-nob | tatoeba-test-v2021-08-07 | 0.66376 | 46.1 | 1277 | 11672 |
| rus-swe | tatoeba-test-v2021-08-07 | 0.69608 | 53.3 | 1282 | 8449 |
| bel-dan | flores101-devtest | 0.47621 | 13.9 | 1012 | 24638 |
| bel-nob | flores101-devtest | 0.44966 | 10.8 | 1012 | 23873 |
| bel-swe | flores101-devtest | 0.47274 | 13.2 | 1012 | 23121 |
| rus-dan | flores101-devtest | 0.55917 | 28.0 | 1012 | 24638 |
| rus-nob | flores101-devtest | 0.50724 | 20.6 | 1012 | 23873 |
| rus-swe | flores101-devtest | 0.55812 | 26.4 | 1012 | 23121 |
| ukr-dan | flores101-devtest | 0.57829 | 30.3 | 1012 | 24638 |
| ukr-nob | flores101-devtest | 0.52271 | 21.1 | 1012 | 23873 |
| ukr-swe | flores101-devtest | 0.57499 | 28.8 | 1012 | 23121 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 23:13:54 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zle-it | 7815686c1d81a39e4c58fe85b2bd74352f0599c4 | 2022-06-01T13:09:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"it",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-it | 2 | null | transformers | 25,284 | ---
language:
- be
- it
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-it
results:
- task:
name: Translation rus-ita
type: translation
args: rus-ita
dataset:
name: flores101-devtest
type: flores_101
args: rus ita devtest
metrics:
- name: BLEU
type: bleu
value: 23.7
- task:
name: Translation ukr-ita
type: translation
args: ukr-ita
dataset:
name: flores101-devtest
type: flores_101
args: ukr ita devtest
metrics:
- name: BLEU
type: bleu
value: 23.2
- task:
name: Translation bel-ita
type: translation
args: bel-ita
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bel-ita
metrics:
- name: BLEU
type: bleu
value: 49.3
- task:
name: Translation rus-ita
type: translation
args: rus-ita
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-ita
metrics:
- name: BLEU
type: bleu
value: 43.5
- task:
name: Translation ukr-ita
type: translation
args: ukr-ita
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-ita
metrics:
- name: BLEU
type: bleu
value: 50.0
---
# opus-mt-tc-big-zle-it
Neural machine translation model for translating from East Slavic languages (zle) to Italian (it).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-19
* source language(s): bel rus ukr
* target language(s): ita
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-19.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-ita/opusTCv20210807_transformer-big_2022-03-19.zip)
* more information about released models: [OPUS-MT zle-ita README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-ita/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Вони не ідіоти.",
"Я не хочу идти в банк."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Non sono idioti.
# Non voglio andare in banca.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-it")
print(pipe("Вони не ідіоти."))
# expected output: Non sono idioti.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-19.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-ita/opusTCv20210807_transformer-big_2022-03-19.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-19.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-ita/opusTCv20210807_transformer-big_2022-03-19.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bel-ita | tatoeba-test-v2021-08-07 | 0.65945 | 49.3 | 264 | 1681 |
| rus-ita | tatoeba-test-v2021-08-07 | 0.64037 | 43.5 | 10045 | 71584 |
| ukr-ita | tatoeba-test-v2021-08-07 | 0.69570 | 50.0 | 5000 | 27846 |
| bel-ita | flores101-devtest | 0.46311 | 13.5 | 1012 | 27306 |
| rus-ita | flores101-devtest | 0.53054 | 23.7 | 1012 | 27306 |
| ukr-ita | flores101-devtest | 0.52783 | 23.2 | 1012 | 27306 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Wed Mar 23 23:17:47 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zle-zle | bf1e5da40b907457791497ed2b4e50a2a4ce116f | 2022-06-01T13:07:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-zle | 2 | null | transformers | 25,285 | ---
language:
- be
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-zle
results:
- task:
name: Translation rus-ukr
type: translation
args: rus-ukr
dataset:
name: flores101-devtest
type: flores_101
args: rus ukr devtest
metrics:
- name: BLEU
type: bleu
value: 25.5
- task:
name: Translation ukr-rus
type: translation
args: ukr-rus
dataset:
name: flores101-devtest
type: flores_101
args: ukr rus devtest
metrics:
- name: BLEU
type: bleu
value: 28.3
- task:
name: Translation bel-rus
type: translation
args: bel-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bel-rus
metrics:
- name: BLEU
type: bleu
value: 68.6
- task:
name: Translation bel-ukr
type: translation
args: bel-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bel-ukr
metrics:
- name: BLEU
type: bleu
value: 65.5
- task:
name: Translation rus-bel
type: translation
args: rus-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-bel
metrics:
- name: BLEU
type: bleu
value: 50.3
- task:
name: Translation rus-ukr
type: translation
args: rus-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-ukr
metrics:
- name: BLEU
type: bleu
value: 70.1
- task:
name: Translation ukr-bel
type: translation
args: ukr-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-bel
metrics:
- name: BLEU
type: bleu
value: 58.9
- task:
name: Translation ukr-rus
type: translation
args: ukr-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-rus
metrics:
- name: BLEU
type: bleu
value: 75.7
---
# opus-mt-tc-big-zle-zle
Neural machine translation model for translating from East Slavic languages (zle) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-07
* source language(s): bel rus ukr
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opusTCv20210807+bt_transformer-big_2022-03-07.zip)
* more information about released models: [OPUS-MT zle-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ukr<< Кот мёртвый.",
">>bel<< Джон живе в Нью-Йорку."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Кіт мертвий.
# Джон жыве ў Нью-Йорку.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-zle")
print(pipe(">>ukr<< Кот мёртвый."))
# expected output: Кіт мертвий.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opusTCv20210807+bt_transformer-big_2022-03-07.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bel-rus | tatoeba-test-v2021-08-07 | 0.82526 | 68.6 | 2500 | 18895 |
| bel-ukr | tatoeba-test-v2021-08-07 | 0.81036 | 65.5 | 2355 | 15179 |
| rus-bel | tatoeba-test-v2021-08-07 | 0.66943 | 50.3 | 2500 | 18756 |
| rus-ukr | tatoeba-test-v2021-08-07 | 0.83639 | 70.1 | 10000 | 60212 |
| ukr-bel | tatoeba-test-v2021-08-07 | 0.75368 | 58.9 | 2355 | 15175 |
| ukr-rus | tatoeba-test-v2021-08-07 | 0.86806 | 75.7 | 10000 | 60387 |
| bel-rus | flores101-devtest | 0.47960 | 14.5 | 1012 | 23295 |
| bel-ukr | flores101-devtest | 0.47335 | 12.8 | 1012 | 22810 |
| rus-ukr | flores101-devtest | 0.55287 | 25.5 | 1012 | 22810 |
| ukr-rus | flores101-devtest | 0.56224 | 28.3 | 1012 | 23295 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 00:15:39 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-base-bat-zle | ef9424b4987c9ac6afa0f74ad0d711623662362b | 2022-06-01T13:09:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bat",
"lt",
"lv",
"ru",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-bat-zle | 2 | null | transformers | 25,286 | ---
language:
- bat
- lt
- lv
- ru
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-bat-zle
results:
- task:
name: Translation lav-rus
type: translation
args: lav-rus
dataset:
name: flores101-devtest
type: flores_101
args: lav rus devtest
metrics:
- name: BLEU
type: bleu
value: 21.1
- task:
name: Translation lit-rus
type: translation
args: lit-rus
dataset:
name: flores101-devtest
type: flores_101
args: lit rus devtest
metrics:
- name: BLEU
type: bleu
value: 21.3
- task:
name: Translation lav-rus
type: translation
args: lav-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: lav-rus
metrics:
- name: BLEU
type: bleu
value: 60.5
- task:
name: Translation lit-rus
type: translation
args: lit-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: lit-rus
metrics:
- name: BLEU
type: bleu
value: 54.9
---
# opus-mt-tc-base-bat-zle
Neural machine translation model for translating from Baltic languages (bat) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): lav lit
* target language(s): rus
* model: transformer-align
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-align_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.zip)
* more information about released models: [OPUS-MT bat-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-zle/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>rus<< Āfrika ir cilvēces šūpulis.",
">>ukr<< Tomas yra mūsų kapitonas."
]
model_name = "pytorch-models/opus-mt-tc-base-bat-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Африка - это колыбель человечества.
# Томас - наш капітан.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-bat-zle")
print(pipe(">>rus<< Āfrika ir cilvēces šūpulis."))
# expected output: Африка - это колыбель человечества.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-align_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.test.txt)
* test set scores: [opusTCv20210807_transformer-align_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| lav-rus | tatoeba-test-v2021-08-07 | 0.75918 | 60.5 | 274 | 1541 |
| lit-rus | tatoeba-test-v2021-08-07 | 0.72796 | 54.9 | 3598 | 21908 |
| lav-rus | flores101-devtest | 0.49210 | 21.1 | 1012 | 23295 |
| lav-ukr | flores101-devtest | 0.48185 | 19.2 | 1012 | 22810 |
| lit-rus | flores101-devtest | 0.49850 | 21.3 | 1012 | 23295 |
| lit-ukr | flores101-devtest | 0.49114 | 19.5 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 00:51:59 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-base-ces_slk-uk | 5dc623121e319ac1d0ad2c0a206e7535560b24f7 | 2022-06-01T13:08:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"sk",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-ces_slk-uk | 2 | null | transformers | 25,287 | ---
language:
- cs
- sk
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-ces_slk-uk
results:
- task:
name: Translation ces-ukr
type: translation
args: ces-ukr
dataset:
name: flores101-devtest
type: flores_101
args: ces ukr devtest
metrics:
- name: BLEU
type: bleu
value: 21.8
- task:
name: Translation slk-ukr
type: translation
args: slk-ukr
dataset:
name: flores101-devtest
type: flores_101
args: slk ukr devtest
metrics:
- name: BLEU
type: bleu
value: 21.4
- task:
name: Translation ces-ukr
type: translation
args: ces-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ces-ukr
metrics:
- name: BLEU
type: bleu
value: 48.6
---
# opus-mt-tc-base-ces_slk-uk
Neural machine translation model for translating from Czech and Slovak (cs+sk) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-08
* source language(s): ces
* target language(s): ukr
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces+slk-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.zip)
* more information about released models: [OPUS-MT ces+slk-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces+slk-ukr/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Replace this with text in an accepted source language.",
"This is the second sentence."
]
model_name = "pytorch-models/opus-mt-tc-base-ces_slk-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-ces_slk-uk")
print(pipe("Replace this with text in an accepted source language."))
```
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces+slk-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces+slk-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ces-ukr | tatoeba-test-v2021-08-07 | 0.66867 | 48.6 | 1787 | 8891 |
| ces-ukr | flores101-devtest | 0.51387 | 21.8 | 1012 | 22810 |
| slk-ukr | flores101-devtest | 0.51418 | 21.4 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 01:01:20 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-base-hu-uk | 4e8eca9f90aefb846669be161382d4cba426be5e | 2022-06-01T13:08:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hu",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-hu-uk | 2 | null | transformers | 25,288 | ---
language:
- hu
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-hu-uk
results:
- task:
name: Translation hun-ukr
type: translation
args: hun-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hun-ukr
metrics:
- name: BLEU
type: bleu
value: 38.1
---
# opus-mt-tc-base-hu-uk
Neural machine translation model for translating from Hungarian (hu) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-08
* source language(s): hun
* target language(s): ukr
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.zip)
* more information about released models: [OPUS-MT hun-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-ukr/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"1000 dollárral tartozom neked.",
"Vizet iszom."
]
model_name = "pytorch-models/opus-mt-tc-base-hu-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Я зобов'язаний вам 1000 доларів.
# Я п'ю воду.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-hu-uk")
print(pipe("1000 dollárral tartozom neked."))
# expected output: Я зобов'язаний вам 1000 доларів.
```
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| hun-ukr | tatoeba-test-v2021-08-07 | 0.61006 | 38.1 | 473 | 2606 |
| hun-ukr | flores101-devtest | 0.49490 | 19.8 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 02:19:16 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-base-tr-uk | deae12bb0d015f4d34bab94c199f24faf9097686 | 2022-06-01T13:02:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-tr-uk | 2 | null | transformers | 25,289 | ---
language:
- tr
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-tr-uk
results:
- task:
name: Translation tur-ukr
type: translation
args: tur-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: tur-ukr
metrics:
- name: BLEU
type: bleu
value: 40.5
---
# opus-mt-tc-base-tr-uk
Neural machine translation model for translating from Turkish (tr) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-07
* source language(s): tur
* target language(s): ukr
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip)
* more information about released models: [OPUS-MT tur-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"1000 yen yeterli mi?",
"Zürih, İsviçre'de bir şehirdir."
]
model_name = "pytorch-models/opus-mt-tc-base-tr-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Чи достатньо 1000 ієн?
# Цюрих - місто в Швейцарії.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-tr-uk")
print(pipe("1000 yen yeterli mi?"))
# expected output: Чи достатньо 1000 ієн?
```
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| tur-ukr | tatoeba-test-v2021-08-07 | 0.63573 | 40.5 | 2520 | 13079 |
| tur-ukr | flores101-devtest | 0.49944 | 19.9 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 03:37:19 EET 2022
* port machine: LM0-400-22516.local
|
Paul-Vinh/bert-base-multilingual-cased-finetuned-squad | 8436b2ee57cfe6b30606ce19d34938351b6e41c2 | 2022-03-24T22:47:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Paul-Vinh | null | Paul-Vinh/bert-base-multilingual-cased-finetuned-squad | 2 | null | transformers | 25,290 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0122
## Model description
More information needed
## Intended uses & limitations
More information needed
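As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the standard `question-answering` pipeline, since it is an extractive QA model fine-tuned on SQuAD; the question and context below are made-up examples:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Paul-Vinh/bert-base-multilingual-cased-finetuned-squad",
)

# Made-up example; thanks to the multilingual base model, non-English inputs should also work.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```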
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9982 | 1.0 | 5555 | 0.9436 |
| 0.7694 | 2.0 | 11110 | 0.9356 |
| 0.5627 | 3.0 | 16665 | 1.0122 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Taekyoon/unicon_v0.5.3_beta | 84903824b1f0b7265ae77212f9086b623d996c89 | 2022-03-25T06:08:45.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Taekyoon | null | Taekyoon/unicon_v0.5.3_beta | 2 | null | transformers | 25,291 | Entry not found |
Shunian/wav2vec2-base-960h-finetune | ee13ee0fdfebd8ddf52e61bddadc341de89cc21f | 2022-03-31T05:02:26.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Shunian | null | Shunian/wav2vec2-base-960h-finetune | 2 | null | transformers | 25,292 | Entry not found |
eliasws/openApiT5-to-description-v3 | f0035c559ca86b1f31fd278f9244bd5ea63d5d88 | 2022-03-25T10:19:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eliasws | null | eliasws/openApiT5-to-description-v3 | 2 | null | transformers | 25,293 | Entry not found |
PSW/ut_del_three_per_each_ver1_early_stop | e821f0e83112b702e47eb61de055c48fda53b948 | 2022-03-25T14:48:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_three_per_each_ver1_early_stop | 2 | null | transformers | 25,294 | Entry not found |
UWB-AIR/MQDD-pretrained | 28a1d402ef65ebda64c3975009c2b58a8d53bed8 | 2022-04-05T06:14:47.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"arxiv:2203.14093",
"transformers",
"license:cc-by-nc-sa-4.0"
] | feature-extraction | false | UWB-AIR | null | UWB-AIR/MQDD-pretrained | 2 | null | transformers | 25,295 | ---
license: cc-by-nc-sa-4.0
---
# MQDD - Multimodal Question Duplicity Detection
This repository publishes pre-trained model for the paper
[MQDD – Pre-training of Multimodal Question Duplicity Detection for Software Engineering Domain](https://arxiv.org/abs/2203.14093). For more information, see the paper.
The Stack Overflow Datasets (SOD) and Stack Overflow Duplicity Dataset (SODD) presented in the paper can be obtained from our [Stack Overflow Dataset repository](https://github.com/kiv-air/StackOverflowDataset).
To acquire the fine-tuned model, see [UWB-AIR/MQDD-duplicate](https://huggingface.co/UWB-AIR/MQDD-duplicates).
The MQDD model is based on a Longformer architecture and is pre-trained on 218.5M training examples. The model was trained using the MLM training objective accompanied by our novel Same Post (SP) and Question Answer (QA) learning objectives, which specifically target the duplicate detection task.
The model can be loaded using the following source code snippet:
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("UWB-AIR/MQDD-pretrained")
model = AutoModel.from_pretrained("UWB-AIR/MQDD-pretrained")
```
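As a small, hypothetical follow-up (not from the paper), the pre-trained encoder can be used to embed two posts and compare their first-token representations; the `[CLS]`-style pooling and cosine similarity below are assumptions of this sketch, not the official duplicate-detection head:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("UWB-AIR/MQDD-pretrained")
model = AutoModel.from_pretrained("UWB-AIR/MQDD-pretrained")

# Made-up Stack Overflow style posts.
posts = [
    "How do I convert a string to an int in Python?",
    "What is the best way to parse an integer from a string?",
]
inputs = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the embedding of the first token of each post and compare the two posts.
embeddings = outputs.last_hidden_state[:, 0]
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(similarity))
```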
## Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/
## How should I cite the MQDD?
For now, please cite [the Arxiv paper](https://arxiv.org/abs/2203.14093):
```
@misc{https://doi.org/10.48550/arxiv.2203.14093,
doi = {10.48550/ARXIV.2203.14093},
url = {https://arxiv.org/abs/2203.14093},
author = {Pašek, Jan and Sido, Jakub and Konopík, Miloslav and Pražák, Ondřej},
title = {MQDD -- Pre-training of Multimodal Question Duplicity Detection for Software Engineering Domain},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
IsaacBot/t5-small-finetuned-qa-google-en-question_v1 | 937edb7de6cf080a9b68ee636937f87fef63a2fd | 2022-03-25T20:33:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | IsaacBot | null | IsaacBot/t5-small-finetuned-qa-google-en-question_v1 | 2 | null | transformers | 25,296 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-qa-google-en-question_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-qa-google-en-question_v1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1358
- Rouge1: 49.6232
- Rouge2: 26.4156
- Rougel: 46.9194
- Rougelsum: 46.8814
- Gen Len: 13.5795
## Model description
More information needed
## Intended uses & limitations
More information needed
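The card does not document the expected input format, but since this is a T5 text-to-text checkpoint, a hypothetical usage sketch with the `text2text-generation` pipeline could look like the following; the input string is a made-up example and may not match the prompt format used during fine-tuning:

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="IsaacBot/t5-small-finetuned-qa-google-en-question_v1",
)

# Made-up input; the actual fine-tuning prompt format is not documented in this card.
text = "The Amazon rainforest produces about 20 percent of the world's oxygen."
print(generator(text, max_length=32))
```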
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of the corresponding training arguments follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
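For readers who want to reproduce this configuration, the hyperparameters above roughly map to the following `Seq2SeqTrainingArguments`; this is a hedged reconstruction, the `output_dir` is assumed, and dataset loading and preprocessing are omitted:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-qa-google-en-question_v1",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,   # effective batch size: 32 * 8 = 256
    num_train_epochs=5,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
    label_smoothing_factor=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```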
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.27 | 100 | 3.5967 | 43.7809 | 21.3303 | 41.6782 | 41.6869 | 12.9745 |
| No log | 0.53 | 200 | 3.4539 | 45.7744 | 22.9574 | 43.4412 | 43.4249 | 13.416 |
| No log | 0.8 | 300 | 3.3771 | 47.1053 | 24.1406 | 44.6092 | 44.6051 | 13.386 |
| No log | 1.06 | 400 | 3.3229 | 47.5933 | 24.7048 | 45.086 | 45.1266 | 13.4725 |
| 3.6954 | 1.33 | 500 | 3.2851 | 47.8847 | 24.7439 | 45.322 | 45.3243 | 13.5975 |
| 3.6954 | 1.6 | 600 | 3.2570 | 48.1836 | 25.3062 | 45.6641 | 45.6346 | 13.5955 |
| 3.6954 | 1.86 | 700 | 3.2321 | 48.7604 | 25.7254 | 46.1789 | 46.1537 | 13.476 |
| 3.6954 | 2.13 | 800 | 3.2140 | 48.7518 | 25.639 | 46.2817 | 46.2343 | 13.5855 |
| 3.6954 | 2.39 | 900 | 3.1963 | 49.0046 | 25.8439 | 46.4097 | 46.3732 | 13.6855 |
| 3.3928 | 2.66 | 1000 | 3.1844 | 49.3227 | 26.0336 | 46.7032 | 46.6402 | 13.557 |
| 3.3928 | 2.93 | 1100 | 3.1736 | 49.4069 | 26.0619 | 46.691 | 46.6406 | 13.5475 |
| 3.3928 | 3.19 | 1200 | 3.1630 | 49.4614 | 26.1224 | 46.7679 | 46.7416 | 13.614 |
| 3.3928 | 3.46 | 1300 | 3.1556 | 49.7542 | 26.4413 | 47.0601 | 47.0201 | 13.625 |
| 3.3928 | 3.72 | 1400 | 3.1500 | 49.4097 | 26.1732 | 46.7324 | 46.6833 | 13.6795 |
| 3.3144 | 3.99 | 1500 | 3.1440 | 49.5359 | 26.3478 | 46.8079 | 46.7769 | 13.604 |
| 3.3144 | 4.26 | 1600 | 3.1406 | 49.8245 | 26.5312 | 47.1247 | 47.0744 | 13.552 |
| 3.3144 | 4.52 | 1700 | 3.1378 | 49.6884 | 26.4023 | 46.9501 | 46.9063 | 13.5785 |
| 3.3144 | 4.79 | 1800 | 3.1358 | 49.6232 | 26.4156 | 46.9194 | 46.8814 | 13.5795 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pinecone/msmarco-distilbert-base-tas-b-covid | 1cd431029aa2ba55d0523c8813f11869be0a63f6 | 2022-03-25T18:30:52.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | pinecone | null | pinecone/msmarco-distilbert-base-tas-b-covid | 2 | null | sentence-transformers | 25,297 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pinecone/msmarco-distilbert-base-tas-b-covid
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/msmarco-distilbert-base-tas-b-covid')
embeddings = model.encode(sentences)
print(embeddings)
```
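Because this checkpoint follows the MS MARCO TAS-B family, it is typically used for dot-product retrieval. A hedged semantic-search sketch (the query and passages are made-up examples) might look like this:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('pinecone/msmarco-distilbert-base-tas-b-covid')

query = "How does the coronavirus spread between people?"
passages = [
    "The virus spreads mainly through respiratory droplets produced when an infected person coughs or sneezes.",
    "The stock market closed higher on Friday after a volatile week of trading.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)

# TAS-B style retrievers are usually scored with the dot product rather than cosine similarity.
scores = util.dot_score(query_emb, passage_emb)[0]
for passage, score in zip(passages, scores):
    print(f"{score:.2f}\t{passage}")
```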
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/msmarco-distilbert-base-tas-b-covid')
model = AutoModel.from_pretrained('pinecone/msmarco-distilbert-base-tas-b-covid')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pinecone/msmarco-distilbert-base-tas-b-covid)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6250 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6250,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/huggingpuppy | a396a0293dccb047ff17d222f36b1886b9e8f2e2 | 2022-03-25T18:42:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/huggingpuppy | 2 | null | transformers | 25,298 | ---
language: en
thumbnail: http://www.huggingtweets.com/huggingpuppy/1648233768787/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504530325526900756/QOTZak3q_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">hug. (INGROUP INTERN)</div>
<div style="text-align: center; font-size: 14px;">@huggingpuppy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from hug. (INGROUP INTERN).
| Data | hug. (INGROUP INTERN) |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 97 |
| Short tweets | 816 |
| Tweets kept | 2336 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wq0kiqq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2), which is fine-tuned on @huggingpuppy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh/artifacts) is logged and versioned.
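For readers who want to replicate a similar setup outside of the huggingtweets tooling, a rough fine-tuning sketch on a plain-text file of tweets is shown below. The file name, epoch count, and other settings here are placeholders, not the values used for this model (those live in the linked W&B run):
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "tweets.txt" is a hypothetical file with one cleaned tweet per line
dataset = load_dataset("text", data_files={"train": "tweets.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-tweets", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```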
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/huggingpuppy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ahmeddbahaa/mt5-finetuned-en-ar | c3c32629c56f98dffeeb2d794a2c4d6feb636793 | 2022-03-26T02:24:12.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/mt5-finetuned-en-ar | 2 | 1 | transformers | 25,299 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: mt5-finetuned-en-ar
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: arabic
metrics:
- name: Rouge1
type: rouge
value: 0.2824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-finetuned-en-ar
This model is a fine-tuned version of [ahmeddbahaa/mt5-small-finetuned-mt5-en](https://huggingface.co/ahmeddbahaa/mt5-small-finetuned-mt5-en) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2314
- Rouge1: 0.2824
- Rouge2: 0.0
- Rougel: 0.2902
- Rougelsum: 0.298
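Although the template sections below are not filled in, the model can be tried for Arabic summarization roughly as follows; the article placeholder and generation settings are illustrative assumptions, not values taken from the card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ahmeddbahaa/mt5-finetuned-en-ar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # an Arabic news article goes here

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```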
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
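As a hedged illustration, these settings roughly correspond to a `Seq2SeqTrainer` setup like the sketch below; the exact training script is not included in the card, and the tokenization lengths are assumptions:
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "ahmeddbahaa/mt5-small-finetuned-mt5-en"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# xlsum Arabic split (the dataset may also be published as csebuetnlp/xlsum)
raw = load_dataset("xlsum", "arabic")

def preprocess(batch):
    # mT5 uses the same tokenizer for source and target text
    model_inputs = tokenizer(batch["text"], max_length=512, truncation=True)
    labels = tokenizer(batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="mt5-finetuned-en-ar",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```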
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.1685 | 1.0 | 4130 | 2.4262 | 0.0941 | 0.0235 | 0.1098 | 0.1098 |
| 2.686 | 2.0 | 8260 | 2.2853 | 0.2824 | 0.0 | 0.298 | 0.298 |
| 2.481 | 3.0 | 12390 | 2.2314 | 0.2824 | 0.0 | 0.2902 | 0.298 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|