modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GPL/dbpedia-entity-tsdae-msmarco-distilbert-gpl | 799bc403f77c354a5a46a676e6c79f9c6e434b11 | 2022-04-19T15:22:14.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/dbpedia-entity-tsdae-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,600 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
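The embeddings produced above can be compared directly for semantic search or clustering. The snippet below is a minimal sketch (not part of the original card) that builds a pairwise cosine-similarity matrix from the `sentence_embeddings` tensor computed in the previous example:
```python
import torch.nn.functional as F

# L2-normalize so that dot products equal cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)

# Pairwise cosine-similarity matrix between the example sentences
cosine_scores = normalized @ normalized.T
print(cosine_scores)
```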
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
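`MarginDistillationLoss` follows the margin-MSE idea used in GPL: the bi-encoder (student) is trained so that its score margin between a positive and a negative passage matches the margin given by a cross-encoder (teacher). The snippet below is only a conceptual sketch of that objective, not the `gpl.toolkit` implementation; the function name and tensor shapes are illustrative assumptions.
```python
import torch.nn.functional as F

def margin_mse_loss(query_emb, pos_emb, neg_emb, teacher_margin):
    """Sketch of margin-MSE distillation.

    query_emb, pos_emb, neg_emb: (batch, dim) dense embeddings from the bi-encoder.
    teacher_margin: (batch,) cross-encoder score(query, pos) - score(query, neg).
    """
    student_pos = (query_emb * pos_emb).sum(dim=-1)  # dot-product scores
    student_neg = (query_emb * neg_emb).sum(dim=-1)
    student_margin = student_pos - student_neg
    return F.mse_loss(student_margin, teacher_margin)
```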
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/fever-tsdae-msmarco-distilbert-gpl | a9b282bf911e6effe4b3621f0bcc54cf8cecc4ba | 2022-04-19T15:47:15.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/fever-tsdae-msmarco-distilbert-gpl | 2 | null | sentence-transformers | 25,601 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/arguana-tsdae-msmarco-distilbert-margin-mse | 5b7202bd768f28aa34e1f2341fc2d7688151fbcd | 2022-04-19T16:42:41.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | GPL | null | GPL/arguana-tsdae-msmarco-distilbert-margin-mse | 2 | null | transformers | 25,602 | Entry not found |
GPL/climate-fever-tsdae-msmarco-distilbert-margin-mse | 80249e117213e1e69753a85733150d56811aa0f5 | 2022-04-19T16:43:00.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | GPL | null | GPL/climate-fever-tsdae-msmarco-distilbert-margin-mse | 2 | null | transformers | 25,603 | Entry not found |
GPL/nfcorpus-tsdae-msmarco-distilbert-margin-mse | a56e544271ad54b243b098739118e4d01b16ac97 | 2022-04-19T16:44:28.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | GPL | null | GPL/nfcorpus-tsdae-msmarco-distilbert-margin-mse | 2 | null | transformers | 25,604 | Entry not found |
robkayinto/xlm-roberta-base-finetuned-panx-it | f770d6c2566d55ba621bae7a31191c129b5033dc | 2022-07-13T18:36:07.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | robkayinto | null | robkayinto/xlm-roberta-base-finetuned-panx-it | 2 | null | transformers | 25,605 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8245828245828245
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2401
- F1: 0.8246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
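A hedged reconstruction of how these hyperparameters might map onto `transformers.TrainingArguments` (the actual training script is not part of this card; the output directory is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-it",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer's AdamW defaults
```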
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8187 | 1.0 | 70 | 0.3325 | 0.7337 |
| 0.2829 | 2.0 | 140 | 0.2554 | 0.8003 |
| 0.1894 | 3.0 | 210 | 0.2401 | 0.8246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
robkayinto/xlm-roberta-base-finetuned-panx-en | 3086f340449baca90908c65af9d8ef44b323e71c | 2022-07-13T19:00:03.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | robkayinto | null | robkayinto/xlm-roberta-base-finetuned-panx-en | 2 | null | transformers | 25,606 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7032474804031354
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3932
- F1: 0.7032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1504 | 1.0 | 50 | 0.5992 | 0.4786 |
| 0.5147 | 2.0 | 100 | 0.4307 | 0.6468 |
| 0.3717 | 3.0 | 150 | 0.3932 | 0.7032 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
robkayinto/xlm-roberta-base-finetuned-panx-all | 9672f1f81e44fe7ee5bb0faac1dd2f9d1cec434b | 2022-07-13T19:31:35.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | robkayinto | null | robkayinto/xlm-roberta-base-finetuned-panx-all | 2 | null | transformers | 25,607 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1757
- F1: 0.8513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2986 | 1.0 | 835 | 0.1939 | 0.8077 |
| 0.1547 | 2.0 | 1670 | 0.1813 | 0.8351 |
| 0.1003 | 3.0 | 2505 | 0.1757 | 0.8513 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
adalbertojunior/db-msm | cea6375919502d6eb86bb0f53243ed6c05027a46 | 2022-04-19T18:56:46.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | adalbertojunior | null | adalbertojunior/db-msm | 2 | null | sentence-transformers | 25,608 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 147625 with parameters:
```
{'batch_size': 512, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
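`CosineSimilarityLoss` trains the bi-encoder on (sentence pair, similarity score) examples: it computes the cosine similarity of the two sentence embeddings and regresses it onto the gold score with mean squared error. The snippet below is only a conceptual sketch of that objective, not the sentence-transformers implementation:
```python
import torch.nn.functional as F

def cosine_similarity_loss(emb_a, emb_b, gold_scores):
    """Sketch: MSE between predicted cosine similarity and the gold score.

    emb_a, emb_b: (batch, dim) embeddings of the two sentences in each pair.
    gold_scores: (batch,) target similarity scores.
    """
    predicted = F.cosine_similarity(emb_a, emb_b, dim=-1)
    return F.mse_loss(predicted, gold_scores)
```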
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
michaellutz/roberta-base-prop-16-train-set | 3a5fe3db93aff7391b9e52c3873cf108e31e4894 | 2022-07-01T18:35:08.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | michaellutz | null | michaellutz/roberta-base-prop-16-train-set | 2 | null | transformers | 25,609 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-prop-16-train-set
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
soniais/distilbert-base-uncased-finetuned-mnli | 00306be0fc06a445760ae0895753bf41186dd252 | 2022-04-21T02:21:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | soniais | null | soniais/distilbert-base-uncased-finetuned-mnli | 2 | null | transformers | 25,610 | Entry not found |
joniponi/multilabel_inpatient_comments_23labels | 1c7e6b9943be9090f41af3045ffce89d4e504c40 | 2022-04-20T03:15:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joniponi | null | joniponi/multilabel_inpatient_comments_23labels | 2 | null | transformers | 25,611 | Entry not found |
PSW/max_sim_del_seed27 | ddbad9ec0677971287240c563f1cfbc1db47f412 | 2022-04-20T03:06:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_sim_del_seed27 | 2 | null | transformers | 25,612 | Entry not found |
PSW/half_sim_del_seed27 | be1cd196bc37b39287914e64933f2a2f184dc019 | 2022-04-20T07:42:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/half_sim_del_seed27 | 2 | null | transformers | 25,613 | Entry not found |
eagles/focus_sum_gpt2 | d5c8f05f33d8f40cbfaddd5cd1caed6ffc8becf8 | 2022-04-20T09:03:21.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | eagles | null | eagles/focus_sum_gpt2 | 2 | null | transformers | 25,614 | ---
tags:
- generated_from_trainer
model-index:
- name: focus_sum_gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# focus_sum_gpt2
This model is a fine-tuned version of [uer/gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5 | 15.15 | 500 | 1.1917 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AswiN037/tamil-Roberta-small | 4c146c1b40110f6031965b416370d69c6e53663f | 2022-04-20T08:43:52.000Z | [
"pytorch",
"roberta",
"fill-mask",
"Tamil",
"dataset:oscar",
"transformers",
"Tamil-Tokenizer",
"Tamil-language-model",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | AswiN037 | null | AswiN037/tamil-Roberta-small | 2 | null | transformers | 25,615 | ---
language:
- Tamil
tags:
- Tamil-Tokenizer
- Tamil-language-model
license: "apache-2.0"
datasets:
- oscar
---
# tokenizer - BPE, 30,522 vocab size
## model - RoBERTa
Trained using MLM on the OSCAR dataset.
Train data size: 5,000 lines only. |
patrickvonplaten/data2vec-audio-base-100h-4-gram | f70046586ea3baec83b53c82575333e9e112dcd0 | 2022-04-21T10:39:14.000Z | [
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"transformers",
"speech",
"license:apache-2.0"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/data2vec-audio-base-100h-4-gram | 2 | null | transformers | 25,616 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Base-100h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The base model, pretrained and fine-tuned on 100 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
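As a rough illustration of the objective described above: a teacher network (an exponential moving average of the student) encodes the unmasked audio, and the student, which sees the masked input, regresses the average of the teacher's top-K layer representations at the masked time steps. The sketch below is based on the paper, not the fairseq code, and the `student`/`teacher` call signatures are hypothetical placeholders.
```python
import torch
import torch.nn.functional as F

def data2vec_step(student, teacher, x, mask, top_k=8):
    """One conceptual data2vec training step (sketch only).

    student, teacher: encoders returning a list of hidden states (batch, time, dim).
    x: input features; mask: boolean tensor marking masked time steps.
    """
    with torch.no_grad():
        # Teacher sees the unmasked input; targets average its top-K layers
        teacher_layers = teacher(x, masked=False)
        targets = torch.stack(teacher_layers[-top_k:]).mean(dim=0)

    # Student sees the masked input and predicts the latent targets
    prediction = student(x, masked=True, mask=mask)[-1]

    # Regress only over masked positions (the paper uses a smooth L1 loss)
    return F.smooth_l1_loss(prediction[mask], targets[mask])
```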
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-100h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-100h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
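The repository name suggests that a 4-gram language model is bundled for decoding. If so (an assumption, not stated in this card), the same logits could be decoded with beam search plus the language model via `Wav2Vec2ProcessorWithLM`, which additionally requires `pyctcdecode` and `kenlm`:
```python
from transformers import Wav2Vec2ProcessorWithLM

# Assumes this repository ships an n-gram LM next to the tokenizer/feature extractor
processor_lm = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/data2vec-audio-base-100h-4-gram")

# LM-boosted decoding of the logits computed in the snippet above
transcription_lm = processor_lm.batch_decode(logits.detach().numpy()).text
print(transcription_lm)
```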
|
riccard/distilbert-base-uncased-finetuned-cola | 186002d51aa93a426fc1d53b4746f807ab29ee08 | 2022-04-20T13:32:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | riccard | null | riccard/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 25,617 | Entry not found |
Matthijs/test-gpt2 | 26b29fc3afc829d9c52a5a6229c6d28b1343c2a5 | 2022-05-11T07:23:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Matthijs | null | Matthijs/test-gpt2 | 2 | null | transformers | 25,618 | This is an empty model card! |
Tejas21/Totto_t5_base_BLEURT_24k_steps | 1ac421c983b730ec3365d71fc310c03cd879ca91 | 2022-04-21T18:43:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2004.04696",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Tejas21 | null | Tejas21/Totto_t5_base_BLEURT_24k_steps | 2 | null | transformers | 25,619 | ---
license: apache-2.0
language:
- en
tags:
- Table to text
- Data to text
---
## Dataset:
- [ToTTo](https://github.com/google-research-datasets/ToTTo)
A controlled table-to-text dataset. ToTTo is an open-source table-to-text dataset with over 120,000 examples in English. It defines a controlled generation task: given a Wikipedia table and a set of highlighted cells, generate a one-sentence description.
## Base Model - T5-Base
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
T5 was built by Google to create a general-purpose model that can understand text. The basic idea behind T5 is to treat every text processing problem as a "text-to-text" problem, i.e. taking text as input and producing new text as output.
## Baseline Preprocessing
[Baseline Preprocessing](https://github.com/google-research/language/tree/master/language/totto)
This code repository serves as a supplementary for the main repository, which can be used to do basic preprocessing of the Totto dataset.
## Fine-tuning
We fine-tuned the T5 conditional generation model for 24,000 steps on the ToTTo dataset, using [BLEURT](https://arxiv.org/abs/2004.04696) as a metric.
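As an illustration of how such a fine-tuned checkpoint could be used for generation (a hedged sketch; the exact table linearization expected by the model is defined by the baseline preprocessing code above, and the example input below is only illustrative):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Tejas21/Totto_t5_base_BLEURT_24k_steps")
model = T5ForConditionalGeneration.from_pretrained("Tejas21/Totto_t5_base_BLEURT_24k_steps")

# Illustrative linearized table with a highlighted cell (format is an assumption)
linearized_table = "<page_title> Lionel Messi </page_title> <cell> 91 goals <col_header> 2012 </col_header> </cell>"
inputs = tokenizer(linearized_table, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```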
|
shkim/distilbert-base-uncased-finetuned-imdb | 1135b496b814957514605c5358418fe6da4ee1b4 | 2022-04-20T14:21:24.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | shkim | null | shkim/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 25,620 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7117 | 1.0 | 157 | 2.4977 |
| 2.5783 | 2.0 | 314 | 2.4241 |
| 2.5375 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Wootang01/xlm-roberta-base-finetuned-hkdse-english-paper4 | 23308027f99fb06bb7e51ac693d0975b71eb8547 | 2022-04-21T01:24:16.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Wootang01 | null | Wootang01/xlm-roberta-base-finetuned-hkdse-english-paper4 | 2 | null | transformers | 25,621 | Entry not found |
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter6 | e352efe2f14a042da4b19367b3b29bfcbe6a047a | 2022-04-20T21:43:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 4m1g0 | null | 4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter6 | 2 | null | transformers | 25,622 | Entry not found |
frozenwalker/SciFive_pubmedqa_question_generation_using_prompt_entity | 3ffaf306b9024b7b3491d7115b459648d818eb16 | 2022-04-20T18:15:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | frozenwalker | null | frozenwalker/SciFive_pubmedqa_question_generation_using_prompt_entity | 2 | null | transformers | 25,623 | Entry not found |
brad1141/Bert_v5 | e22d2c0f0770604314ab748f064a448776ac2053 | 2022-04-20T22:23:00.000Z | [
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | brad1141 | null | brad1141/Bert_v5 | 2 | null | transformers | 25,624 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Bert_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert_v5
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9191
- Precision: 0.7612
- Recall: 0.8007
- F1: 0.5106
- Accuracy: 0.7357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0663 | 1.0 | 934 | 0.8636 | 0.6973 | 0.8467 | 0.4082 | 0.7023 |
| 0.8354 | 2.0 | 1868 | 0.8261 | 0.7367 | 0.8086 | 0.4733 | 0.7221 |
| 0.7164 | 3.0 | 2802 | 0.7737 | 0.7572 | 0.7988 | 0.5055 | 0.7347 |
| 0.6149 | 4.0 | 3736 | 0.7542 | 0.7488 | 0.8402 | 0.5176 | 0.7438 |
| 0.5153 | 5.0 | 4670 | 0.8185 | 0.7614 | 0.8123 | 0.5017 | 0.7389 |
| 0.4314 | 6.0 | 5604 | 0.8599 | 0.7543 | 0.8259 | 0.5085 | 0.7395 |
| 0.3689 | 7.0 | 6538 | 0.9191 | 0.7612 | 0.8007 | 0.5106 | 0.7357 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
albertus-casm/domlm-finetuned-cleaneval | 738e55a7c171f7c3232e52def0085015e9431bb8 | 2022-04-20T19:59:36.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | albertus-casm | null | albertus-casm/domlm-finetuned-cleaneval | 2 | null | transformers | 25,625 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: domlm-finetuned-cleaneval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# domlm-finetuned-cleaneval
This model is a fine-tuned version of [albertus-casm/dom-lm-epoch-3](https://huggingface.co/albertus-casm/dom-lm-epoch-3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2098
- Precision: 0.9438
- Recall: 0.9559
- F1: 0.9498
- Accuracy: 0.9412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 111 | 0.2472 | 0.9163 | 0.9636 | 0.9394 | 0.9276 |
| No log | 2.0 | 222 | 0.1798 | 0.9313 | 0.9671 | 0.9489 | 0.9393 |
| No log | 3.0 | 333 | 0.2098 | 0.9438 | 0.9559 | 0.9498 | 0.9412 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
juancavallotti/roberta-base-culinary | 9b9dfa39c2c3e0d5ed05a1ce44932407738ff812 | 2022-04-28T09:35:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | juancavallotti | null | juancavallotti/roberta-base-culinary | 2 | null | transformers | 25,626 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-culinary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-culinary
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.3285 | 1.0 | 72679 | 1.2932 |
| 1.1837 | 2.0 | 145358 | nan |
| 1.1466 | 3.0 | 218037 | 1.1180 |
| 1.0833 | 4.0 | 290716 | 1.0820 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Rumesh/mbart-tr-simp | c063cc85356056cff285ff2c5269886d016e717c | 2022-04-21T02:15:01.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Rumesh | null | Rumesh/mbart-tr-simp | 2 | null | transformers | 25,627 | Entry not found |
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter8 | 338200d8fcb7d8915ecc69be0102066ef3ae83a4 | 2022-04-21T07:48:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 4m1g0 | null | 4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter8 | 2 | null | transformers | 25,628 | Entry not found |
SnailPoo/distilbert-base-uncased-finetuned-ner | 56ef9eae6a9dc2673043cf921fa0c8c4b88d5800 | 2022-04-21T08:27:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | SnailPoo | null | SnailPoo/distilbert-base-uncased-finetuned-ner | 2 | null | transformers | 25,629 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1079
- Precision: 0.8408
- Recall: 0.8686
- F1: 0.8545
- Accuracy: 0.9638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 453 | 0.1322 | 0.7759 | 0.8370 | 0.8053 | 0.9498 |
| 0.246 | 2.0 | 906 | 0.1115 | 0.8284 | 0.8616 | 0.8446 | 0.9611 |
| 0.1012 | 3.0 | 1359 | 0.1079 | 0.8408 | 0.8686 | 0.8545 | 0.9638 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter9 | 12c208e4ea7696d7c7be5111a18b524a622db8db | 2022-04-21T19:45:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | 4m1g0 | null | 4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter9 | 2 | null | transformers | 25,630 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-gl-jupyter9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-gl-jupyter9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0970
- Wer: 0.0624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6977 | 3.36 | 400 | 0.4273 | 0.4574 |
| 0.2282 | 6.72 | 800 | 0.1492 | 0.1723 |
| 0.0884 | 10.08 | 1200 | 0.1344 | 0.1336 |
| 0.0594 | 13.44 | 1600 | 0.1329 | 0.1238 |
| 0.0437 | 16.8 | 2000 | 0.1137 | 0.1153 |
| 0.0384 | 20.17 | 2400 | 0.1197 | 0.1033 |
| 0.0332 | 23.53 | 2800 | 0.1147 | 0.0980 |
| 0.0282 | 26.89 | 3200 | 0.1079 | 0.0917 |
| 0.0236 | 30.25 | 3600 | 0.1144 | 0.0922 |
| 0.0237 | 33.61 | 4000 | 0.1130 | 0.0880 |
| 0.019 | 36.97 | 4400 | 0.1035 | 0.0818 |
| 0.0164 | 40.33 | 4800 | 0.1045 | 0.0813 |
| 0.0146 | 43.69 | 5200 | 0.1037 | 0.0735 |
| 0.0111 | 47.06 | 5600 | 0.1085 | 0.0701 |
| 0.0093 | 50.42 | 6000 | 0.1039 | 0.0659 |
| 0.0084 | 53.78 | 6400 | 0.0970 | 0.0636 |
| 0.0073 | 57.14 | 6800 | 0.0970 | 0.0624 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
4m1g0/wav2vec2-large-xls-r-53m-gl-jupyter7 | 632ef71a82f03629920963ba6b7949cba7360a13 | 2022-04-21T19:55:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | 4m1g0 | null | 4m1g0/wav2vec2-large-xls-r-53m-gl-jupyter7 | 2 | null | transformers | 25,631 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-53m-gl-jupyter7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-53m-gl-jupyter7
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1000
- Wer: 0.0639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8697 | 3.36 | 400 | 0.2631 | 0.2756 |
| 0.1569 | 6.72 | 800 | 0.1243 | 0.1300 |
| 0.0663 | 10.08 | 1200 | 0.1124 | 0.1153 |
| 0.0468 | 13.44 | 1600 | 0.1118 | 0.1037 |
| 0.0356 | 16.8 | 2000 | 0.1102 | 0.0978 |
| 0.0306 | 20.17 | 2400 | 0.1095 | 0.0935 |
| 0.0244 | 23.53 | 2800 | 0.1072 | 0.0844 |
| 0.0228 | 26.89 | 3200 | 0.1014 | 0.0874 |
| 0.0192 | 30.25 | 3600 | 0.1084 | 0.0831 |
| 0.0174 | 33.61 | 4000 | 0.1048 | 0.0772 |
| 0.0142 | 36.97 | 4400 | 0.1063 | 0.0764 |
| 0.0131 | 40.33 | 4800 | 0.1046 | 0.0770 |
| 0.0116 | 43.69 | 5200 | 0.0999 | 0.0716 |
| 0.0095 | 47.06 | 5600 | 0.1044 | 0.0729 |
| 0.0077 | 50.42 | 6000 | 0.1024 | 0.0670 |
| 0.0071 | 53.78 | 6400 | 0.0968 | 0.0631 |
| 0.0064 | 57.14 | 6800 | 0.1000 | 0.0639 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
MeshalAlamr/wav2vec2-xls-r-300m-ar-3 | 867645be8e3a8c29c9be905e6e7c0b8d54b6f723 | 2022-04-24T11:08:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | MeshalAlamr | null | MeshalAlamr/wav2vec2-xls-r-300m-ar-3 | 2 | null | transformers | 25,632 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-ar-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar-3
This model is a fine-tuned version of [MeshalAlamr/wav2vec2-xls-r-300m-ar-2](https://huggingface.co/MeshalAlamr/wav2vec2-xls-r-300m-ar-2) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5567
- Wer: 0.3115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1654 | 1.18 | 400 | 0.5815 | 0.4237 |
| 0.3412 | 2.35 | 800 | 0.5534 | 0.4479 |
| 0.4661 | 1.77 | 1200 | 0.6339 | 0.4915 |
| 0.441 | 2.36 | 1600 | 0.6435 | 0.5016 |
| 0.3273 | 5.88 | 2000 | 0.5338 | 0.4361 |
| 0.3099 | 7.06 | 2400 | 0.5570 | 0.4303 |
| 0.2833 | 8.24 | 2800 | 0.5731 | 0.4427 |
| 0.2714 | 9.41 | 3200 | 0.5551 | 0.4212 |
| 0.2598 | 10.59 | 3600 | 0.5757 | 0.4214 |
| 0.2458 | 11.76 | 4000 | 0.5269 | 0.4065 |
| 0.2316 | 12.94 | 4400 | 0.5469 | 0.4053 |
| 0.219 | 14.12 | 4800 | 0.5539 | 0.3912 |
| 0.2022 | 15.29 | 5200 | 0.5773 | 0.3887 |
| 0.1771 | 16.47 | 5600 | 0.5374 | 0.3623 |
| 0.176 | 17.65 | 6000 | 0.5545 | 0.3763 |
| 0.1645 | 18.82 | 6400 | 0.5332 | 0.3580 |
| 0.1501 | 20.0 | 6800 | 0.5496 | 0.3614 |
| 0.1372 | 21.18 | 7200 | 0.5716 | 0.3608 |
| 0.1325 | 22.35 | 7600 | 0.5476 | 0.3475 |
| 0.1233 | 23.53 | 8000 | 0.5657 | 0.3412 |
| 0.1148 | 24.71 | 8400 | 0.5399 | 0.3324 |
| 0.1058 | 25.88 | 8800 | 0.5678 | 0.3323 |
| 0.1004 | 27.06 | 9200 | 0.5648 | 0.3252 |
| 0.0953 | 28.24 | 9600 | 0.5594 | 0.3159 |
| 0.0875 | 29.41 | 10000 | 0.5567 | 0.3115 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.11.0
- Datasets 1.18.3
- Tokenizers 0.10.3
|
joniponi/multilabel_inpatient_comments_14labels | 8f08eae6fceaa5a8fa9d9257a2f440846d6d1312 | 2022-04-21T16:01:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joniponi | null | joniponi/multilabel_inpatient_comments_14labels | 2 | null | transformers | 25,633 | Entry not found |
jackmleitch/distilbert-base-uncased-distilled-clinc | 22c6ad8e4ba0e75f153d3dc782e28c38f31bc1cf | 2022-04-21T20:04:39.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jackmleitch | null | jackmleitch/distilbert-base-uncased-distilled-clinc | 2 | null | transformers | 25,634 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9432258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1004
- Accuracy: 0.9432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9044 | 1.0 | 318 | 0.5748 | 0.7390 |
| 0.4491 | 2.0 | 636 | 0.2876 | 0.88 |
| 0.2538 | 3.0 | 954 | 0.1813 | 0.9229 |
| 0.1765 | 4.0 | 1272 | 0.1388 | 0.9294 |
| 0.1422 | 5.0 | 1590 | 0.1214 | 0.9345 |
| 0.1243 | 6.0 | 1908 | 0.1114 | 0.9406 |
| 0.1138 | 7.0 | 2226 | 0.1066 | 0.94 |
| 0.1076 | 8.0 | 2544 | 0.1030 | 0.9423 |
| 0.104 | 9.0 | 2862 | 0.1010 | 0.9419 |
| 0.1019 | 10.0 | 3180 | 0.1004 | 0.9432 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
masakhane/byt5_ibo_en_news | bfa46ff173f2e363424b49c45d27cefb0bf6b05f | 2022-04-22T11:48:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/byt5_ibo_en_news | 2 | null | transformers | 25,635 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_ibo_news | 6825281f6b7c0c34ceec6ce2711b6817051bc9eb | 2022-04-22T12:45:15.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_ibo_news | 2 | null | transformers | 25,636 | ---
license: afl-3.0
---
|
ngwlh/wav2vec2-base-ft-keyword-spotting | ce8b341352ded907301c6e1ac93d140cf30326f8 | 2022-04-22T04:23:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | ngwlh | null | ngwlh/wav2vec2-base-ft-keyword-spotting | 2 | null | transformers | 25,637 | Entry not found |
DioLiu/SimpleDataset | 0f2e840e3fb1de1701323b2939a20f0f2b9a9ac3 | 2022-04-22T07:36:10.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | DioLiu | null | DioLiu/SimpleDataset | 2 | null | transformers | 25,638 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SimpleDataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SimpleDataset
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.3454 |
| No log | 2.0 | 8 | 3.8818 |
| No log | 3.0 | 12 | 3.6762 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Wootang01/distilgpt2-finetuned-hkdse-english-paper4 | 38a5ac9760cd7bcd25f79cb02dfd935bb5285a40 | 2022-04-22T13:22:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Wootang01 | null | Wootang01/distilgpt2-finetuned-hkdse-english-paper4 | 2 | null | transformers | 25,639 | Entry not found |
santoshsawant/distilbert-base-uncased-finetuned-emotion | 6e67a866f5834fe3989864e595b5a8013b619012 | 2022-04-22T11:46:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | santoshsawant | null | santoshsawant/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,640 | Entry not found |
Tlacaelel/DialoGPT-small-jarvis | 3fcb269191c7748facb5fe064dcc4a21d544c92f | 2022-04-22T11:59:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Tlacaelel | null | Tlacaelel/DialoGPT-small-jarvis | 2 | null | transformers | 25,641 | ---
tags:
- conversational
---
# JARVIS DialoGPT Model |
nqcccccc/absa-phobert-qab-vslp-res | 98bb465dd3febb8273a521c50b2c1cde47daad0f | 2022-04-23T01:13:35.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | nqcccccc | null | nqcccccc/absa-phobert-qab-vslp-res | 2 | null | transformers | 25,642 | Entry not found |
robkayinto/pegasus-samsum | dec809820098abc58cdef200c3d67f0564e2b843 | 2022-04-22T17:38:32.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | robkayinto | null | robkayinto/pegasus-samsum | 2 | null | transformers | 25,643 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4874
## Model description
More information needed
## Intended uses & limitations
More information needed
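A minimal usage sketch for dialogue summarization (assuming the standard `pipeline` API; the example dialogue is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="robkayinto/pegasus-samsum")
dialogue = "Anna: Are we still meeting at 6?\nTom: Yes, see you at the cafe."
print(summarizer(dialogue)[0]["summary_text"])
```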
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.702 | 0.54 | 500 | 1.4874 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kilimandjaro/camembert-base-sentiment | 18ffb6b05d1080020e25a92d0cfa26ed5f5288cb | 2022-04-26T23:13:38.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | kilimandjaro | null | kilimandjaro/camembert-base-sentiment | 2 | null | transformers | 25,644 | WIP, not working yet |
princeton-nlp/efficient_mlm_m0.40-801010 | df4bf41e0da5d9e5725d8480b54aa37a3fc2804e | 2022-04-27T18:54:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"transformers",
"autotrain_compatible"
] | fill-mask | false | princeton-nlp | null | princeton-nlp/efficient_mlm_m0.40-801010 | 2 | null | transformers | 25,645 | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
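# Note (an assumption, not from the original card): with the repo code on the Python path,
# the released checkpoint can presumably then be loaded through the usual API, e.g.
# model = RobertaForMaskedLM.from_pretrained("princeton-nlp/efficient_mlm_m0.40-801010")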
``` |
Enoch2090/MAGI | b840c21333bc00cc9d3afe0011b3c5c8c54c3667 | 2022-04-23T07:58:21.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | Enoch2090 | null | Enoch2090/MAGI | 2 | null | sentence-transformers | 25,646 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 14979 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4493,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
sunminyu/KE-Blender | 26a8e624da3ba6e6b4e198082f2b9f4ef41dce72 | 2022-04-24T06:38:07.000Z | [
"pytorch",
"blenderbot-small",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sunminyu | null | sunminyu/KE-Blender | 2 | null | transformers | 25,647 | Entry not found |
adityay1221/Xegho.30.2 | 6b09551d1659097747eb8d27b2dc72d4ea26daaa | 2022-04-23T12:27:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | adityay1221 | null | adityay1221/Xegho.30.2 | 2 | null | transformers | 25,648 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Xegho.30.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Xegho.30.2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1632
- Bleu: 91.1608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 121
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0.6 | 100 | 1.4767 | 23.2970 |
| No log | 1.19 | 200 | 1.0102 | 34.8884 |
| No log | 1.79 | 300 | 0.7680 | 39.9415 |
| No log | 2.38 | 400 | 0.6129 | 42.4210 |
| 1.2252 | 2.98 | 500 | 0.4962 | 46.5901 |
| 1.2252 | 3.57 | 600 | 0.4293 | 48.9165 |
| 1.2252 | 4.17 | 700 | 0.3698 | 50.4146 |
| 1.2252 | 4.76 | 800 | 0.3194 | 52.4519 |
| 1.2252 | 5.36 | 900 | 0.2836 | 53.0005 |
| 0.463 | 5.95 | 1000 | 0.2635 | 55.5264 |
| 0.463 | 6.55 | 1100 | 0.2444 | 57.7553 |
| 0.463 | 7.14 | 1200 | 0.2262 | 60.8152 |
| 0.463 | 7.74 | 1300 | 0.2205 | 60.3349 |
| 0.463 | 8.33 | 1400 | 0.2082 | 62.5781 |
| 0.297 | 8.93 | 1500 | 0.2045 | 62.9341 |
| 0.297 | 9.52 | 1600 | 0.1969 | 63.9225 |
| 0.297 | 10.12 | 1700 | 0.1939 | 63.9559 |
| 0.297 | 10.71 | 1800 | 0.1842 | 66.0123 |
| 0.297 | 11.31 | 1900 | 0.1836 | 65.7767 |
| 0.2403 | 11.9 | 2000 | 0.1807 | 65.1204 |
| 0.2403 | 12.5 | 2100 | 0.1778 | 65.5556 |
| 0.2403 | 13.1 | 2200 | 0.1753 | 66.2715 |
| 0.2403 | 13.69 | 2300 | 0.1728 | 67.0917 |
| 0.2403 | 14.29 | 2400 | 0.1716 | 67.2965 |
| 0.1976 | 14.88 | 2500 | 0.1719 | 66.5856 |
| 0.1976 | 15.48 | 2600 | 0.1706 | 66.7707 |
| 0.1976 | 16.07 | 2700 | 0.1698 | 66.8323 |
| 0.1976 | 16.67 | 2800 | 0.1705 | 66.8579 |
| 0.1976 | 17.26 | 2900 | 0.1663 | 67.3175 |
| 0.1747 | 17.86 | 3000 | 0.1671 | 68.2097 |
| 0.1747 | 18.45 | 3100 | 0.1681 | 68.1515 |
| 0.1747 | 19.05 | 3200 | 0.1650 | 68.6221 |
| 0.1747 | 19.64 | 3300 | 0.1643 | 68.6828 |
| 0.1747 | 20.24 | 3400 | 0.1662 | 68.9329 |
| 0.1626 | 20.83 | 3500 | 0.1644 | 68.9651 |
| 0.1626 | 21.43 | 3600 | 0.1660 | 68.6685 |
| 0.1626 | 22.02 | 3700 | 0.1640 | 68.7471 |
| 0.1626 | 22.62 | 3800 | 0.1630 | 68.6685 |
| 0.1626 | 23.21 | 3900 | 0.1637 | 68.6835 |
| 0.1437 | 23.81 | 4000 | 0.1632 | 68.5208 |
| 0.1437 | 24.4 | 4100 | 0.1640 | 68.5059 |
| 0.1437 | 25.0 | 4200 | 0.1645 | 68.5059 |
| 0.1437 | 25.6 | 4300 | 0.1639 | 68.5059 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.0
- Tokenizers 0.12.1
|
HankyStyle/Multi-ling-BERT | d2dc7f21cfad763ad72ce34b464aeaf7abdb42b2 | 2022-04-24T08:19:01.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | HankyStyle | null | HankyStyle/Multi-ling-BERT | 2 | null | transformers | 25,649 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Multi-ling-BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Multi-ling-BERT
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6 |
nqcccccc/absa-bertmulti-qab-vslp-res | ab31b2f6fa7dee4ebe00910ea8b394ca884d74b6 | 2022-04-25T11:51:54.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | nqcccccc | null | nqcccccc/absa-bertmulti-qab-vslp-res | 2 | null | transformers | 25,650 | Entry not found |
scasutt/wav2vec2-base_toy_train_data_only_augmented | 44bde348c82c1228abc4fd78c0df4de4bfe6e9ac | 2022-04-24T07:43:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-base_toy_train_data_only_augmented | 2 | null | transformers | 25,651 | Entry not found |
Wootang01/bert-large-cased-finetuned-hkdse-english-paper4 | f2dfb8e0d02d302dcc2dda490ee2d950ab0ffa73 | 2022-04-23T13:43:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Wootang01 | null | Wootang01/bert-large-cased-finetuned-hkdse-english-paper4 | 2 | 1 | transformers | 25,652 | Entry not found |
Raffay/speech_processing_project_wav2vec2 | 4b4006960f13d408db9997136c7346b5edff16c2 | 2022-04-23T16:04:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Raffay | null | Raffay/speech_processing_project_wav2vec2 | 2 | null | transformers | 25,653 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: speech_processing_project_wav2vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speech_processing_project_wav2vec2
This model is a fine-tuned version of [kingabzpro/wav2vec2-urdu](https://huggingface.co/kingabzpro/wav2vec2-urdu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
allenai/aspire-contextualsentence-singlem-compsci | 8b0c408f47727716ac550680345ea7a63b317d8c | 2022-04-24T20:06:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2111.08366",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | allenai | null | allenai/aspire-contextualsentence-singlem-compsci | 2 | null | transformers | 25,654 | ---
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `tsAspire` and represents the paper's proposed multi-vector model for fine-grained scientific document similarity.
## Model Card
### Model description
This model is a BERT-based multi-vector model trained for fine-grained similarity of computer science papers. The model takes the title and abstract of a paper as input and represents the paper with contextual sentence vectors obtained by averaging the token representations of individual sentences - the whole title and abstract are encoded with cross-attention in the encoder block before obtaining sentence embeddings. The model is trained with a novel form of textual supervision that uses co-citation contexts to align the sentences of positive examples. At test time, documents are ranked by the smallest L2 distance between sentences of the two documents, or by the smallest L2 distance between a set of query sentences and a candidate document.
### Training data
The model is trained on pairs of co-cited papers with their sentences aligned by the co-citation context in a contrastive learning setup. The model is trained on 1.2 million computer science paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers. For example - the papers in brackets below are all co-cited and each pair of papers would be used as a training pair with the abstracts sentence aligned using the co-citation context. Here the context notes why the cited papers are similar:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for fine-grained document similarity tasks in **computer science** scientific text using multiple vectors per document. The model allows fine-grained similarity by establishing sentence-to-sentence similarity between documents. It is best suited to an aspect-conditional task formulation where a query consists of one or more sentences in a query document and candidates must be retrieved along these specified sentences. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as document- or sentence-level classification. Since the training data comes primarily from computer science, performance on other domains may be poorer.
### How to use
This model can be used via the `transformers` library and some additional code to compute contextual sentence vectors.
View example usage in the model github repo: https://github.com/allenai/aspire#tsaspire
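Purely as an illustration of the idea described above - and not the repository's implementation - a minimal sketch is shown below. The sentence splitting, the offset-based pooling, and the example strings are all assumptions; it encodes the title and abstract jointly, averages the token states of each sentence, and scores a candidate by the smallest L2 distance between query and candidate sentence vectors (truncation is not handled).
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/aspire-contextualsentence-singlem-compsci")
model = AutoModel.from_pretrained("allenai/aspire-contextualsentence-singlem-compsci")

def contextual_sentence_vectors(title, sentences):
    # Encode title + abstract jointly, then mean-pool the token states of each sentence.
    text = title + " " + " ".join(sentences)
    enc = tokenizer(text, return_tensors="pt", truncation=True, return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    vecs, start = [], len(title) + 1
    for sent in sentences:
        end = start + len(sent)
        # tokens whose character span falls inside this sentence (special tokens excluded)
        in_sent = (offsets[:, 0] >= start) & (offsets[:, 1] <= end) & (offsets[:, 1] > offsets[:, 0])
        vecs.append(hidden[in_sent].mean(dim=0))
        start = end + 1
    return torch.stack(vecs)

query = contextual_sentence_vectors("Query title", ["First query sentence.", "Second query sentence."])
cand = contextual_sentence_vectors("Candidate title", ["A candidate sentence.", "Another candidate sentence."])
score = torch.cdist(query, cand).min().item()  # smaller distance = more similar
print(score)
```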
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Performance here is reported on CSFCube (computer science/English). This is detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). CSFCube presents a finer-grained query via selected sentences in a query abstract based on which a finer-grained retrieval must be made from candidate abstracts.
In using this sentence level model we rank documents by the minimal L2 distance between the query sentences and a candidate abstract.
### Evaluation results
The released model `aspire-contextualsentence-singlem-compsci` is compared against `allenai/specter`, a bi-encoder baseline, and `all-mpnet-base-v2`, a strong non-contextual sentence-bert baseline trained on ~1 billion training examples. `aspire-contextualsentence-singlem-compsci`<sup>*</sup> is the performance reported in our paper, averaged over 3 re-runs of the model. The released model `aspire-contextualsentence-singlem-compsci` is the single best run among the 3 re-runs.
| | CSFCube aggregated | CSFCube aggregated|
|--------------------------------------------:|:---------:|:-------:|
| | MAP | NDCG%20 |
| `all-mpnet-base-v2` | 34.64 | 54.94 |
| `specter` | 34.23 | 53.28 |
| `aspire-contextualsentence-singlem-compsci`<sup>*</sup> | 40.26 | 60.71 |
| `aspire-contextualsentence-singlem-compsci` | 41.33 | 61.46 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-contextualsentence-singlem-biomed`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-biomed): If you wanted to run on biomedical papers and want to use a model trained to match a _single_ sentence between documents.
[`aspire-contextualsentence-multim-biomed`](https://huggingface.co/allenai/aspire-contextualsentence-multim-biomed): If you wanted to run on biomedical papers and want to use a model trained to match _multiple_ sentences between documents.
[`aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci): If you wanted to run on computer science papers and want to use a model trained to match _multiple_ sentences between documents. |
Raffay/local_speech_processing_project_wav2vec2 | ac6a96dc1f689a0a0c1e9c969041ccd4dfa26e2d | 2022-04-23T19:08:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Raffay | null | Raffay/local_speech_processing_project_wav2vec2 | 2 | null | transformers | 25,655 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: local_speech_processing_project_wav2vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# local_speech_processing_project_wav2vec2
This model is a fine-tuned version of [kingabzpro/wav2vec2-urdu](https://huggingface.co/kingabzpro/wav2vec2-urdu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
smeoni/nbme-xlnet-large-cased | f239c316b89bb4ec6acf6fd1c3a06f194149d130 | 2022-04-24T02:42:27.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | smeoni | null | smeoni/nbme-xlnet-large-cased | 2 | null | transformers | 25,656 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: nbme-xlnet-large-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-xlnet-large-cased
This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2931 | 1.0 | 1850 | 1.9915 |
| 1.9467 | 2.0 | 3700 | 1.7866 |
| 1.7983 | 3.0 | 5550 | 1.6919 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
zq2186/model_da | edb8aff8f9c5dc81c90f2b9eb1e38a70d3bb8eeb | 2022-04-23T22:45:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | zq2186 | null | zq2186/model_da | 2 | null | transformers | 25,657 | Entry not found |
dllllb/poetnet-rut5-stihiru-libru | 5818c87f0bee2a2ad18d8186dfcd55a681ca15e0 | 2022-04-24T00:07:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dllllb | null | dllllb/poetnet-rut5-stihiru-libru | 2 | null | transformers | 25,658 | Entry not found |
BigSalmon/InformalToFormalLincoln40 | c28956e39a1bb11ae103e2ea768309f9a049e719 | 2022-04-24T15:00:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln40 | 2 | null | transformers | 25,659 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln40")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln40")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
``` |
zq2186/model_da_small | be47ed667313fd6128dab7d3b46a291efa0183ac | 2022-04-24T01:38:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | zq2186 | null | zq2186/model_da_small | 2 | null | transformers | 25,660 | Entry not found |
artemis13fowl/test_model | cdb3ee8347c30bd59553b30b1af35e77634c6130 | 2022-04-24T07:05:51.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | artemis13fowl | null | artemis13fowl/test_model | 2 | null | transformers | 25,661 | Entry not found |
jackh1995/roberta-base-chinese-extractive-qa-scratch | 68c8757fe839879f37b9386f90faf218cd2a61a0 | 2022-04-24T11:32:12.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | jackh1995 | null | jackh1995/roberta-base-chinese-extractive-qa-scratch | 2 | null | transformers | 25,662 | Entry not found |
singhajeet13/autotrain-NMT-778623908 | 5af5383367aba37d754414190c7983fa46ceb2e5 | 2022-04-24T13:48:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"en",
"hi",
"dataset:singhajeet13/autotrain-data-NMT",
"transformers",
"autotrain",
"translation",
"co2_eq_emissions",
"autotrain_compatible"
] | translation | false | singhajeet13 | null | singhajeet13/autotrain-NMT-778623908 | 2 | null | transformers | 25,663 | ---
tags:
- autotrain
- translation
language:
- en
- hi
datasets:
- singhajeet13/autotrain-data-NMT
co2_eq_emissions: 1.0568409665060605
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 778623908
- CO2 Emissions (in grams): 1.0568409665060605
## Validation Metrics
- Loss: 2.4664785861968994
- SacreBLEU: 1.6168
- Gen len: 17.645 |
vikasaeta/distilbert-base-uncased-finetuned-ner | ed05b0e1ea40dcd578ebf246ea3f0388912d2b90 | 2022-04-27T09:38:56.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:few_nerd",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | vikasaeta | null | vikasaeta/distilbert-base-uncased-finetuned-ner | 2 | null | transformers | 25,664 | ---
pipeline_tag: token-classification
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- few_nerd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: few_nerd
type: few_nerd
args: supervised
metrics:
- name: Precision
type: precision
value: 0.6424480067658478
- name: Recall
type: recall
value: 0.6854236732015421
- name: F1
type: f1
value: 0.6632404008334158
- name: Accuracy
type: accuracy
value: 0.9075199647113962
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the few_nerd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3136
- Precision: 0.6424
- Recall: 0.6854
- F1: 0.6632
- Accuracy: 0.9075
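As a quick illustration (not part of the original card), the checkpoint is tagged for token classification and can presumably be queried through the token-classification pipeline; the example sentence is arbitrary:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="vikasaeta/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Barack Obama visited the Eiffel Tower in Paris."))
```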
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.328 | 1.0 | 8236 | 0.3197 | 0.6274 | 0.6720 | 0.6489 | 0.9041 |
| 0.2776 | 2.0 | 16472 | 0.3111 | 0.6433 | 0.6759 | 0.6592 | 0.9069 |
| 0.241 | 3.0 | 24708 | 0.3136 | 0.6424 | 0.6854 | 0.6632 | 0.9075 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jenspt/bert_classification_v2 | b7a43dcd4c4ab71af6fecd2fef206bd96f559e2e | 2022-04-27T14:09:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jenspt | null | jenspt/bert_classification_v2 | 2 | null | transformers | 25,665 | Entry not found |
QuickRead/PP0_rm_v1 | 837ca7585a9e6142f2545c5c9faeedbf1402bd30 | 2022-04-24T15:15:21.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | QuickRead | null | QuickRead/PP0_rm_v1 | 2 | null | transformers | 25,666 | Entry not found |
selen/distilbert-base-uncased-finetuned-cola | b5933656bebc6806d1f74c681188ef19d6213fe6 | 2022-04-24T19:06:54.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | selen | null | selen/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 25,667 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Raffay/v1_speech_processing_project_wav2vec2 | 1f0accddd68f8f862767a9330d1f1c9d715bf8d6 | 2022-04-26T13:18:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Raffay | null | Raffay/v1_speech_processing_project_wav2vec2 | 2 | null | transformers | 25,668 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: v1_speech_processing_project_wav2vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v1_speech_processing_project_wav2vec2
This model is a fine-tuned version of [kingabzpro/wav2vec2-large-xls-r-300m-Urdu](https://huggingface.co/kingabzpro/wav2vec2-large-xls-r-300m-Urdu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
neal49/distilbert-sst2-notrainer | f7e786668e9a0a9a1427556c2c70bc9131fdd037 | 2022-04-24T21:09:20.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | neal49 | null | neal49/distilbert-sst2-notrainer | 2 | null | transformers | 25,669 | Entry not found |
AntoDono/DialoGPT-Bopy-13k | 077a7c9cb0fc5730414a4d01ec52c761e33a2197 | 2022-04-24T21:27:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AntoDono | null | AntoDono/DialoGPT-Bopy-13k | 2 | null | transformers | 25,670 | Entry not found |
SophieTr/PP0_rm_v1 | 515700f47ce2c73dc9f8dd0e851edf6360bec828 | 2022-04-24T22:50:31.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SophieTr | null | SophieTr/PP0_rm_v1 | 2 | null | transformers | 25,671 | Entry not found |
rosimeirecosta/bert-base-cased-pt-c-corpus | 4b122718abb272bff45a186c70f5deadd6ff80ae | 2022-04-25T01:36:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | rosimeirecosta | null | rosimeirecosta/bert-base-cased-pt-c-corpus | 2 | null | transformers | 25,672 | ---
license: apache-2.0
---
<b>(BERT base) Language modeling in Portuguese (C-corpus)</b>
<b>bert-base-cased-pt-c-corpus</b> is a language model for Portuguese, fine-tuned on 24/04/2022 in Google Colab from BERTimbau base on the C-Corpus, a dataset of user-generated texts.
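As a short usage sketch (not part of the original card; the example sentence is arbitrary), the checkpoint is a BERT masked-language model and can presumably be queried with the fill-mask pipeline:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="rosimeirecosta/bert-base-cased-pt-c-corpus")
print(fill_mask("Hoje o tempo está muito [MASK]."))
```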
|
joniponi/multilabel_inpatient_comments_16labels | 90afacd8323b965f9776ccdc3a427daa6b378fd1 | 2022-04-27T16:20:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joniponi | null | joniponi/multilabel_inpatient_comments_16labels | 2 | null | transformers | 25,673 | # HCAHPS survey comments multilabel classification
This model is a fine-tuned version of [Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on a dataset of HCAHPS survey comments.
It achieves the following results on the evaluation set:
| label | precision | recall | f1-score | support |
|:---|:---:|:---:|:---:|:---:|
| medical | 0.87 | 0.81 | 0.84 | 83 |
| environmental | 0.77 | 0.91 | 0.84 | 93 |
| administration | 0.58 | 0.32 | 0.41 | 22 |
| communication | 0.85 | 0.82 | 0.84 | 50 |
| condition | 0.42 | 0.52 | 0.46 | 29 |
| treatment | 0.90 | 0.78 | 0.83 | 68 |
| food | 0.92 | 0.94 | 0.93 | 36 |
| clean | 0.65 | 0.83 | 0.73 | 18 |
| bathroom | 0.64 | 0.64 | 0.64 | 14 |
| discharge | 0.83 | 0.83 | 0.83 | 24 |
| wait | 0.96 | 1.00 | 0.98 | 24 |
| financial | 0.44 | 1.00 | 0.62 | 4 |
| extra_nice | 0.20 | 0.13 | 0.16 | 23 |
| rude | 1.00 | 0.64 | 0.78 | 11 |
| nurse | 0.92 | 0.98 | 0.95 | 110 |
| doctor | 0.96 | 0.84 | 0.90 | 57 |
| micro avg | 0.81 | 0.81 | 0.81 | 666 |
| macro avg | 0.75 | 0.75 | 0.73 | 666 |
| weighted avg | 0.82 | 0.81 | 0.81 | 666 |
| samples avg | 0.64 | 0.64 | 0.62 | 666 |
## Model description
The model classifies free-text comments into the following labels
* Medical
* Environmental
* Administration
* Communication
* Condition
* Treatment
* Food
* Clean
* Bathroom
* Discharge
* Wait
* Financial
* Extra_nice
* Rude
* Nurse
* Doctor
## How to use
You can use the model directly through the Transformers library. Check out the [model page](https://huggingface.co/joniponi/multilabel_inpatient_comments_16labels) for more details.
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("joniponi/multilabel_inpatient_comments_16labels")
model = AutoModel.from_pretrained("joniponi/multilabel_inpatient_comments_16labels")
```
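To actually score a comment against the 16 labels, a minimal inference sketch could look like the following. It assumes the checkpoint ships a sequence-classification head configured for multi-label output; the example comment and the 0.5 sigmoid threshold are illustrative, not from the original card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "joniponi/multilabel_inpatient_comments_16labels"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The nurses were wonderful but the room was never cleaned."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits[0]
probs = torch.sigmoid(logits)  # independent probability per label (multi-label)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```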
|
ankitkupadhyay/mt5-small-finetuned-amazon-en-es | b880caca1919c66c117bd9f09f08ac0134740777 | 2022-04-25T06:57:10.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | ankitkupadhyay | null | ankitkupadhyay/mt5-small-finetuned-amazon-en-es | 2 | null | transformers | 25,674 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0255
- Rouge1: 17.469
- Rouge2: 8.5134
- Rougel: 17.1167
- Rougelsum: 17.2481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 8.094 | 1.0 | 1209 | 3.2933 | 12.7976 | 5.1617 | 12.4199 | 12.5113 |
| 3.9263 | 2.0 | 2418 | 3.1487 | 16.2082 | 8.3215 | 15.744 | 15.807 |
| 3.599 | 3.0 | 3627 | 3.0789 | 16.9706 | 8.2425 | 16.3972 | 16.4067 |
| 3.429 | 4.0 | 4836 | 3.0492 | 17.2122 | 8.7398 | 16.7892 | 16.795 |
| 3.3279 | 5.0 | 6045 | 3.0384 | 17.5381 | 8.7438 | 17.0764 | 17.1831 |
| 3.2518 | 6.0 | 7254 | 3.0343 | 17.0966 | 8.5622 | 16.7016 | 16.8022 |
| 3.2084 | 7.0 | 8463 | 3.0255 | 16.7713 | 8.0472 | 16.3159 | 16.4091 |
| 3.1839 | 8.0 | 9672 | 3.0255 | 17.469 | 8.5134 | 17.1167 | 17.2481 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/random_sim_ins_seed1 | 548a6b82786ef17b5366f7fa6ad752567c161c84 | 2022-04-25T07:59:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins_seed1 | 2 | null | transformers | 25,675 | Entry not found |
MSLars/t5-small-ace_en_p_pretrained | 6d1b6392f35a8807e22c214287380b18658abfda | 2022-04-25T09:23:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | MSLars | null | MSLars/t5-small-ace_en_p_pretrained | 2 | null | transformers | 25,676 | Entry not found |
PSW/random_sim_ins_seed42 | b76aedbafae7ca90b8d8df2cbc6b1a0be3a1803c | 2022-04-25T09:41:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins_seed42 | 2 | null | transformers | 25,677 | Entry not found |
spuun/kekbot-beta-2-medium | d2a93e73a557bae1f54ea1ddf292c082fb94d184 | 2022-04-25T18:19:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational",
"license:cc-by-nc-sa-4.0",
"co2_eq_emissions"
] | conversational | false | spuun | null | spuun/kekbot-beta-2-medium | 2 | null | transformers | 25,678 | ---
language:
- en
tags:
- conversational
co2_eq_emissions:
emissions: "940"
source: "mlco2.github.io"
training_type: "fine-tuning"
geographical_location: "West Java, Indonesia"
hardware_used: "1 Tesla P100"
license: cc-by-nc-sa-4.0
widget:
- text: "Hey kekbot! What's up?"
example_title: "Asking what's up"
- text: "Hey kekbot! How r u?"
example_title: "Asking how he is"
---
> THIS MODEL IS IN PUBLIC BETA, PLEASE DO NOT EXPECT ANY FORM OF STABILITY IN ITS CURRENT STATE.
# Art Union server chatbot
Based on a DialoGPT-medium model, fine-tuned on a small subset (≤115k messages) of Art Union's general-chat channel.
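As an illustration only, a multi-turn exchange might be run as in the sketch below. It follows the generic DialoGPT chat pattern rather than anything published with this card; the prompts and sampling settings are arbitrary:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("spuun/kekbot-beta-2-medium")
model = AutoModelForCausalLM.from_pretrained("spuun/kekbot-beta-2-medium")

chat_history_ids = None
for user_text in ["Hey kekbot! What's up?", "Wanna see some art?"]:
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    # append the new user turn to the running conversation history
    bot_input = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(
        bot_input,
        max_length=1000,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(chat_history_ids[0, bot_input.shape[-1]:], skip_special_tokens=True)
    print("kekbot:", reply)
```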
### Current issues
These issues (which will hopefully be fixed in future iterations) include, but are not limited to:
- Limited turns: after ~11 turns, output may break for no apparent reason.
- Inconsistent variance: the model acts like an overfitted model from time to time for no apparent reason.
|
PSW/max_sim_ins_seed27 | 2f6a5fd3a8003b0d5910e54cb1e9c34ceffd0a2f | 2022-04-25T11:22:24.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_sim_ins_seed27 | 2 | null | transformers | 25,679 | Entry not found |
PSW/max_sim_ins_seed42 | edad38c744f974cb5c888358964790d9004b4a4e | 2022-04-25T12:05:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_sim_ins_seed42 | 2 | null | transformers | 25,680 | Entry not found |
0x12/t5-opus_infopankki-en-zh-0 | 68811e67f4151641f143510b7b6b3bc7e59dd1ba | 2022-04-25T13:55:10.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:opus_infopankki",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | 0x12 | null | 0x12/t5-opus_infopankki-en-zh-0 | 2 | null | transformers | 25,681 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
model-index:
- name: t5-opus_infopankki-en-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-opus_infopankki-en-zh
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2786 | 1.0 | 1496 | 2.8797 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
drsis/pegasus-samsum | 1db79d3c6a5c23dd872a685c14ae070ecd33fd98 | 2022-04-25T18:16:41.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | drsis | null | drsis/pegasus-samsum | 2 | null | transformers | 25,682 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1284 | 0.01 | 10 | 2.5960 |
| 3.122 | 0.02 | 20 | 2.5579 |
| 3.0196 | 0.03 | 30 | 2.4983 |
| 2.9803 | 0.04 | 40 | 2.4197 |
| 2.8471 | 0.05 | 50 | 2.3258 |
| 2.7692 | 0.07 | 60 | 2.2438 |
| 2.682 | 0.08 | 70 | 2.1608 |
| 2.3648 | 0.09 | 80 | 2.0838 |
| 2.5696 | 0.1 | 90 | 2.0222 |
| 2.3403 | 0.11 | 100 | 1.9713 |
| 2.2036 | 0.12 | 110 | 1.9199 |
| 2.1998 | 0.13 | 120 | 1.8750 |
| 2.3006 | 0.14 | 130 | 1.8382 |
| 2.1182 | 0.15 | 140 | 1.8050 |
| 2.1493 | 0.16 | 150 | 1.7748 |
| 2.0437 | 0.17 | 160 | 1.7494 |
| 1.9236 | 0.18 | 170 | 1.7289 |
| 2.0114 | 0.2 | 180 | 1.7106 |
| 1.9939 | 0.21 | 190 | 1.6906 |
| 1.928 | 0.22 | 200 | 1.6737 |
| 1.9444 | 0.23 | 210 | 1.6603 |
| 1.9071 | 0.24 | 220 | 1.6485 |
| 1.8314 | 0.25 | 230 | 1.6369 |
| 1.8085 | 0.26 | 240 | 1.6277 |
| 1.7493 | 0.27 | 250 | 1.6203 |
| 1.8539 | 0.28 | 260 | 1.6089 |
| 1.7048 | 0.29 | 270 | 1.5999 |
| 1.7486 | 0.3 | 280 | 1.5921 |
| 1.795 | 0.31 | 290 | 1.5842 |
| 1.6613 | 0.33 | 300 | 1.5815 |
| 1.8163 | 0.34 | 310 | 1.5732 |
| 1.6133 | 0.35 | 320 | 1.5621 |
| 1.8 | 0.36 | 330 | 1.5542 |
| 1.7159 | 0.37 | 340 | 1.5506 |
| 1.8081 | 0.38 | 350 | 1.5483 |
| 1.7365 | 0.39 | 360 | 1.5451 |
| 1.7334 | 0.4 | 370 | 1.5405 |
| 1.7329 | 0.41 | 380 | 1.5334 |
| 1.6923 | 0.42 | 390 | 1.5259 |
| 1.6868 | 0.43 | 400 | 1.5227 |
| 1.7033 | 0.45 | 410 | 1.5163 |
| 1.6805 | 0.46 | 420 | 1.5144 |
| 1.6056 | 0.47 | 430 | 1.5126 |
| 1.7317 | 0.48 | 440 | 1.5086 |
| 1.6303 | 0.49 | 450 | 1.5015 |
| 1.7136 | 0.5 | 460 | 1.4943 |
| 1.534 | 0.51 | 470 | 1.4910 |
| 1.6682 | 0.52 | 480 | 1.4917 |
| 1.6234 | 0.53 | 490 | 1.4885 |
| 1.7103 | 0.54 | 500 | 1.4857 |
| 1.7673 | 0.55 | 510 | 1.4800 |
| 1.6631 | 0.56 | 520 | 1.4776 |
| 1.7073 | 0.58 | 530 | 1.4745 |
| 1.6843 | 0.59 | 540 | 1.4698 |
| 1.6849 | 0.6 | 550 | 1.4679 |
| 1.6054 | 0.61 | 560 | 1.4642 |
| 1.6073 | 0.62 | 570 | 1.4629 |
| 1.5896 | 0.63 | 580 | 1.4591 |
| 1.608 | 0.64 | 590 | 1.4580 |
| 1.58 | 0.65 | 600 | 1.4548 |
| 1.5722 | 0.66 | 610 | 1.4548 |
| 1.5529 | 0.67 | 620 | 1.4542 |
| 1.5948 | 0.68 | 630 | 1.4518 |
| 1.5869 | 0.7 | 640 | 1.4489 |
| 1.577 | 0.71 | 650 | 1.4488 |
| 1.6517 | 0.72 | 660 | 1.4477 |
| 1.5955 | 0.73 | 670 | 1.4436 |
| 1.5678 | 0.74 | 680 | 1.4402 |
| 1.6743 | 0.75 | 690 | 1.4384 |
| 1.5791 | 0.76 | 700 | 1.4374 |
| 1.6397 | 0.77 | 710 | 1.4380 |
| 1.5637 | 0.78 | 720 | 1.4363 |
| 1.5849 | 0.79 | 730 | 1.4356 |
| 1.5815 | 0.8 | 740 | 1.4350 |
| 1.5797 | 0.81 | 750 | 1.4362 |
| 1.5551 | 0.83 | 760 | 1.4354 |
| 1.5486 | 0.84 | 770 | 1.4341 |
| 1.5756 | 0.85 | 780 | 1.4320 |
| 1.5326 | 0.86 | 790 | 1.4300 |
| 1.6198 | 0.87 | 800 | 1.4290 |
| 1.5947 | 0.88 | 810 | 1.4288 |
| 1.6326 | 0.89 | 820 | 1.4291 |
| 1.6231 | 0.9 | 830 | 1.4288 |
| 1.597 | 0.91 | 840 | 1.4281 |
| 1.5781 | 0.92 | 850 | 1.4273 |
| 1.6835 | 0.93 | 860 | 1.4260 |
| 1.5373 | 0.94 | 870 | 1.4257 |
| 1.5458 | 0.96 | 880 | 1.4252 |
| 1.4953 | 0.97 | 890 | 1.4252 |
| 1.5299 | 0.98 | 900 | 1.4252 |
| 1.5853 | 0.99 | 910 | 1.4251 |
| 1.5723 | 1.0 | 920 | 1.4251 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.12.1
|
PSW/half_sim_ins_seed27 | 27af83a6a7fbc89adc2052296ced172a286289d0 | 2022-04-25T15:14:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/half_sim_ins_seed27 | 2 | null | transformers | 25,683 | Entry not found |
Lucifermorningstar011/autotrain-final-784824211 | 33fb37a121e4cbf0464189f36818991f9eb7a63c | 2022-04-25T18:49:50.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:Lucifermorningstar011/autotrain-data-final",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Lucifermorningstar011 | null | Lucifermorningstar011/autotrain-final-784824211 | 2 | null | transformers | 25,684 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Lucifermorningstar011/autotrain-data-final
co2_eq_emissions: 292.55119229577315
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 784824211
- CO2 Emissions (in grams): 292.55119229577315
## Validation Metrics
- Loss: 0.17682738602161407
- Accuracy: 0.9732196168090091
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Lucifermorningstar011/autotrain-final-784824211
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Lucifermorningstar011/autotrain-final-784824211", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Lucifermorningstar011/autotrain-final-784824211", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
PSW/half_sim_ins_seed42 | a5153b5b039ba8e6710ca1d4ce07fbc85ca5a28b | 2022-04-25T15:59:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/half_sim_ins_seed42 | 2 | null | transformers | 25,685 | Entry not found |
apkbala107/myowntamilelectramodel | 0c02a18a2cf4626ba4d840cf43988be6a1b269b4 | 2022-04-25T16:29:50.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers",
"license:cc"
] | null | false | apkbala107 | null | apkbala107/myowntamilelectramodel | 2 | null | transformers | 25,686 | ---
license: cc
---
|
Isobutylcyclopentane/swin-tiny-patch4-window7-224-finetuned-eurosat | bd5579336b5a76a88bf68a4080cf422ea2708735 | 2022-04-26T00:50:14.000Z | [
"pytorch",
"tensorboard",
"perceiver",
"image-classification",
"transformers"
] | image-classification | false | Isobutylcyclopentane | null | Isobutylcyclopentane/swin-tiny-patch4-window7-224-finetuned-eurosat | 2 | null | transformers | 25,687 | Entry not found |
Kutay/fine_tuned_squad_aip | ad6f4c38c8c246546b3fbd8d8529b3d7b0cefaef | 2022-04-26T02:36:17.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Kutay | null | Kutay/fine_tuned_squad_aip | 2 | null | transformers | 25,688 | Entry not found |
negfir/bert_uncased_L-10_H-256_A-4wiki103 | 9d997452eff1aa03b5c363583539180ceefc2592 | 2022-04-26T06:17:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-256_A-4wiki103 | 2 | null | transformers | 25,689 | Entry not found |
Isobutylcyclopentane/2022-075004-finetuned-eurosat | 20bca587be1af261364dcc28c164d461b9278561 | 2022-04-26T08:54:34.000Z | [
"pytorch",
"tensorboard",
"perceiver",
"image-classification",
"transformers"
] | image-classification | false | Isobutylcyclopentane | null | Isobutylcyclopentane/2022-075004-finetuned-eurosat | 2 | null | transformers | 25,690 | Entry not found |
negfir/bert_uncased_L-6_H-256_A-4wiki103 | 32b2e1eec422fa645d7ec523f234268105865383 | 2022-04-26T09:12:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-6_H-256_A-4wiki103 | 2 | null | transformers | 25,691 | Entry not found |
ZZ99/deberta-v3-large-tapt | 1310b84b205a40247227c45e8ec0cc57489b1a00 | 2022-04-29T09:24:11.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ZZ99 | null | ZZ99/deberta-v3-large-tapt | 2 | null | transformers | 25,692 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-mlm
This model is a fine-tuned version of [/root/autodl-tmp/nbme/tmp/test-mlm/deberta-v3-large-tapt](https://huggingface.co//root/autodl-tmp/nbme/tmp/test-mlm/deberta-v3-large-tapt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3251
- Accuracy: 0.7285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
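For readers who want to approximate this setup, the listed values map onto `TrainingArguments` roughly as in the sketch below; the `output_dir` is a placeholder, and any argument not listed above is left at its library default.
```python
from transformers import TrainingArguments

# Illustrative only: output_dir is a placeholder; Adam betas/epsilon match the defaults, so they are not set.
training_args = TrainingArguments(
    output_dir="test-mlm",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```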
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Khalsuu/english-filipino-wav2vec2-l-xls-r-test | cc557e101f0a847f97577a8bea726d79053e7d7d | 2022-05-05T04:50:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:filipino_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Khalsuu | null | Khalsuu/english-filipino-wav2vec2-l-xls-r-test | 2 | null | transformers | 25,693 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- filipino_voice
model-index:
- name: english-filipino-wav2vec2-l-xls-r-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-filipino-wav2vec2-l-xls-r-test
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5795
- Wer: 0.3996
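A minimal transcription sketch, assuming 16 kHz mono audio; `sample.wav` is a placeholder path and the model id is taken from this card:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Khalsuu/english-filipino-wav2vec2-l-xls-r-test"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; resample to 16 kHz before feeding the model.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```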
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0751 | 2.09 | 400 | 2.4744 | 0.9804 |
| 0.7852 | 4.19 | 800 | 0.5836 | 0.5620 |
| 0.3751 | 6.28 | 1200 | 0.4873 | 0.4658 |
| 0.2578 | 8.38 | 1600 | 0.5725 | 0.5289 |
| 0.1897 | 10.47 | 2000 | 0.5342 | 0.4856 |
| 0.1394 | 12.57 | 2400 | 0.5677 | 0.4761 |
| 0.1048 | 14.66 | 2800 | 0.5708 | 0.4415 |
| 0.0848 | 16.75 | 3200 | 0.5908 | 0.4374 |
| 0.0652 | 18.85 | 3600 | 0.5795 | 0.3996 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
maretamasaeva/glue_sst_classifier | a23c5ab8ed4d6060179b4de16dd84f6cb8fe0899 | 2022-04-26T11:43:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | maretamasaeva | null | maretamasaeva/glue_sst_classifier | 2 | null | transformers | 25,694 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
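A quick inference sketch using the model id of this card; the label names returned depend on the saved config and are not documented here:
```python
from transformers import pipeline

# Minimal sketch: batch of two example sentences, labels come from the saved config.
sentiment = pipeline("text-classification", model="maretamasaeva/glue_sst_classifier")
print(sentiment(["This movie was an absolute delight.", "The plot made no sense at all."]))
```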
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
manueltonneau/bert-twitter-en-is-hired | b5e257d6525f31f959a881e238bfd3ea2ab0094e | 2022-04-26T16:00:13.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-en-is-hired | 2 | null | transformers | 25,695 | ---
language: en # <-- my language
widget:
- text: "I was just hired, yay!"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Is Hired (1), else (0)
- country: US
- language: English
- architecture: BERT base
## Model description
This model is a version of `DeepPavlov/bert-base-cased-conversational` finetuned to recognize English tweets where a user mentions that she was hired in the past month. It was trained on English tweets from US-based users. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user was recently hired (label=1)
- the negative class referring to all other tweets (label=0)
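A minimal inference sketch, reusing the widget example from this card; the exact label strings returned (e.g. `LABEL_1` vs. a custom name) depend on the saved config:
```python
from transformers import pipeline

# Sketch only: the positive class (recently hired) corresponds to label 1 as described above.
classifier = pipeline("text-classification", model="manueltonneau/bert-twitter-en-is-hired")
print(classifier("I was just hired, yay!"))
```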
## Resources
The dataset of English tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
manueltonneau/bert-twitter-en-is-unemployed | 34dee3e537b6c819b4e298013698e03a1527ea74 | 2022-04-26T15:49:51.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-en-is-unemployed | 2 | null | transformers | 25,696 | ---
language: en # <-- my language
widget:
- text: "Still unemployed..."
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Is Unemployed (1), else (0)
- country: US
- language: English
- architecture: BERT base
## Model description
This model is a version of `DeepPavlov/bert-base-cased-conversational` finetuned to recognize English tweets where a user mentions that she is unemployed. It was trained on English tweets from US-based users. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user is currently unemployed (label=1)
- the negative class referring to all other tweets (label=0)
## Resources
The dataset of English tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
0x12/t5-opus_infopankki-en-zh | dba4e9c40484322d9c3c4c6fc125e870e9f869eb | 2022-04-26T15:22:23.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:opus_infopankki",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | 0x12 | null | 0x12/t5-opus_infopankki-en-zh | 2 | null | transformers | 25,697 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
model-index:
- name: t5-opus_infopankki-en-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-opus_infopankki-en-zh
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3548
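A hedged generation sketch is shown below; the task prefix is an assumption inherited from t5-small conventions, since the card does not state how inputs were formatted during fine-tuning.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "0x12/t5-opus_infopankki-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "translate English to Chinese: " prefix is an assumption, not documented in this card.
text = "translate English to Chinese: Where is the nearest health centre?"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```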
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.065 | 1.0 | 1496 | 2.7383 |
| 2.8459 | 2.0 | 2992 | 2.6077 |
| 2.7296 | 3.0 | 4488 | 2.5336 |
| 2.6639 | 4.0 | 5984 | 2.4761 |
| 2.6234 | 5.0 | 7480 | 2.4342 |
| 2.5847 | 6.0 | 8976 | 2.4038 |
| 2.5536 | 7.0 | 10472 | 2.3808 |
| 2.5213 | 8.0 | 11968 | 2.3663 |
| 2.5275 | 9.0 | 13464 | 2.3574 |
| 2.5215 | 10.0 | 14960 | 2.3548 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
manueltonneau/bert-twitter-es-is-hired | bbfb90c7e552546b5a75762ef78fd742c9e26556 | 2022-04-26T16:06:22.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-es-is-hired | 2 | null | transformers | 25,698 | ---
language: es # <-- my language
widget:
- text: "Hoy me contrataron!"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Is Hired (1), else (0)
- country: MX
- language: Spanish
- architecture: BERT base
## Model description
This model is a version of `dccuchile/bert-base-spanish-wwm-cased` finetuned to recognize Spanish tweets where a user mentions that she was hired in the past month. It was trained on Spanish tweets from users based in Mexico. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user was recently hired (label=1)
- the negative class referring to all other tweets (label=0)
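A minimal sketch using the raw model and tokenizer; the widget example from this card is reused as input, and the positive ("recently hired") class is assumed to sit at index 1:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "manueltonneau/bert-twitter-es-is-hired"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Hoy me contrataron!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
# Index 1 is assumed to be the positive class (recent hire disclosure).
print(probs[0, 1].item())
```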
## Resources
The dataset of Spanish tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
hbruce11216/distilbert-base-uncased-finetuned-OTTO | aeea1dc425270a76a1042d7a0cd2e552e09eaa71 | 2022-05-03T18:51:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | hbruce11216 | null | hbruce11216/distilbert-base-uncased-finetuned-OTTO | 2 | null | transformers | 25,699 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-OTTO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-OTTO
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7687 | 1.0 | 17 | 3.3507 |
| 3.5069 | 2.0 | 34 | 3.2786 |
| 3.4126 | 3.0 | 51 | 3.2575 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|