modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ceggian/sbert_standard_reddit_mnr | 5338ec9d1db3116f9cf90e6618de79c683af86a7 | 2022-05-11T06:47:13.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_standard_reddit_mnr | 2 | null | sentence-transformers | 25,900 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39289 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3928,
"weight_decay": 0.01
}
```
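The card lists the training components but not the training call itself. The following is only a hedged sketch of how these pieces fit together in sentence-transformers, using placeholder training pairs and an assumed base checkpoint rather than the actual Reddit data:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses, datasets

model = SentenceTransformer("bert-base-uncased")  # assumed base checkpoint, not confirmed by the card

# Placeholder (post, reply) pairs standing in for the Reddit training data
train_examples = [InputExample(texts=[f"post {i}", f"reply {i}"]) for i in range(32)]
train_dataloader = datasets.NoDuplicatesDataLoader(train_examples, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # similarity_fct defaults to cos_sim

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="WarmupLinear",
    warmup_steps=3928,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```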
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Diegomejia/bert-ucb-v1 | 7b098c72af132e0a7eb51b893f1d5383246817f8 | 2022-05-11T06:56:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Diegomejia | null | Diegomejia/bert-ucb-v1 | 2 | null | transformers | 25,901 | Entry not found |
ceggian/bert_post_trained_reddit_batch64 | ab9911036be52c45cf970480599e16e2dad54e6b | 2022-05-11T07:01:17.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
] | null | false | ceggian | null | ceggian/bert_post_trained_reddit_batch64 | 2 | null | transformers | 25,902 | Entry not found |
masakhane/mbart50_zul_en_news | eae22ed568787a87092f130ba2cad84c63614d94 | 2022-05-12T13:06:17.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_zul_en_news | 2 | null | transformers | 25,903 | ---
license: afl-3.0
---
|
masakhane/mbart50_en_zul_news | 71231a216f6a9a760bb5bfc83debb805d24100d6 | 2022-05-12T13:06:20.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_en_zul_news | 2 | null | transformers | 25,904 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_news_ft | 91458b077bd212c1f9a866f2c0964e45c2b8a5a5 | 2022-05-12T13:36:16.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_zul_rel_news_ft | 2 | null | transformers | 25,905 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel | 372600595b1a7155e5b46176aa677a8bf229c966 | 2022-05-12T12:40:17.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_twi_en_rel | 2 | null | transformers | 25,906 | ---
license: afl-3.0
---
|
PSW/min2_sim_swap_seed42 | e5ef8e17dd529fa651c10fe866649709d45a79f4 | 2022-05-12T03:27:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min2_sim_swap_seed42 | 2 | null | transformers | 25,907 | Entry not found |
PSW/max2_sim_swap_seed27 | 1c6941f59acf268a7a0a8056952bf550403609f4 | 2022-05-12T04:54:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max2_sim_swap_seed27 | 2 | null | transformers | 25,908 | Entry not found |
lucaordronneau/finbert-finetuned-FG-SINGLE_SENTENCE-NEWS-WEIGHTED | 8739777170fe6a3170abfc5d869d732f7818cf99 | 2022-05-11T13:26:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | lucaordronneau | null | lucaordronneau/finbert-finetuned-FG-SINGLE_SENTENCE-NEWS-WEIGHTED | 2 | null | transformers | 25,909 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finbert-finetuned-FG-SINGLE_SENTENCE-NEWS-WEIGHTED
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finbert-finetuned-FG-SINGLE_SENTENCE-NEWS-WEIGHTED
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2741
- Accuracy: 0.7475
- F1: 0.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
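As a rough illustration only (the training script itself is not part of this card), the list above might map onto Hugging Face `TrainingArguments` as follows; the output directory is a placeholder and everything not listed is left at the library defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finbert-finetuned-FG-SINGLE_SENTENCE-NEWS-WEIGHTED",  # placeholder
    learning_rate=6e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the AdamW defaults
)
```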
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 249 | 0.9150 | 0.7346 | 0.6484 |
| No log | 2.0 | 498 | 0.8837 | 0.6210 | 0.6317 |
| 1.033 | 3.0 | 747 | 0.8460 | 0.6485 | 0.6666 |
| 1.033 | 4.0 | 996 | 1.0089 | 0.6831 | 0.6909 |
| 0.5642 | 5.0 | 1245 | 1.2507 | 0.7352 | 0.7152 |
| 0.5642 | 6.0 | 1494 | 1.3241 | 0.7129 | 0.7042 |
| 0.2078 | 7.0 | 1743 | 1.5163 | 0.7528 | 0.7230 |
| 0.2078 | 8.0 | 1992 | 1.5818 | 0.7352 | 0.7236 |
| 0.1108 | 9.0 | 2241 | 1.7930 | 0.7012 | 0.7046 |
| 0.1108 | 10.0 | 2490 | 1.8262 | 0.7305 | 0.7211 |
| 0.07 | 11.0 | 2739 | 2.0415 | 0.7440 | 0.7192 |
| 0.07 | 12.0 | 2988 | 2.1260 | 0.7563 | 0.7230 |
| 0.0392 | 13.0 | 3237 | 2.1502 | 0.7528 | 0.7323 |
| 0.0392 | 14.0 | 3486 | 2.2117 | 0.7516 | 0.7270 |
| 0.0174 | 15.0 | 3735 | 2.2657 | 0.7405 | 0.7236 |
| 0.0174 | 16.0 | 3984 | 2.2741 | 0.7475 | 0.7253 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
PSW/max2_sim_swap_seed42 | cff847bdf5a1999e90eb80378a711a02f07fbe01 | 2022-05-12T05:38:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max2_sim_swap_seed42 | 2 | null | transformers | 25,910 | Entry not found |
PSW/low_resource_percent1_min2swap_seed1 | 53e619f633d4ba8a086d9cdf9fc7ad06cb2c41b5 | 2022-05-12T05:50:59.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_min2swap_seed1 | 2 | null | transformers | 25,911 | Entry not found |
lilitket/20220511-173138 | f1c8de085ad87ac1558d4ecb41a21179dbbe5094 | 2022-05-13T00:32:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220511-173138 | 2 | null | transformers | 25,912 | Entry not found |
PSW/low_resource_percent1_min2swap_seed27 | 4bf7f33144f02e39da14d3b682248789da2aed26 | 2022-05-12T06:02:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_min2swap_seed27 | 2 | null | transformers | 25,913 | Entry not found |
bansals10/wav2vec2-large-xls-r-300m-turkish-colab | 3f0411949980de537d51843610e1be3a94cc4337 | 2022-05-12T15:25:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bansals10 | null | bansals10/wav2vec2-large-xls-r-300m-turkish-colab | 2 | null | transformers | 25,914 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
PSW/low_resource_percent1_max2swap_seed42 | 2f8e97d1050cf810eaa38ea5f0f4731bb4ff3ef0 | 2022-05-12T06:54:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_max2swap_seed42 | 2 | null | transformers | 25,915 | Entry not found |
aware-ai/wav2vec2-xls-r-1b-german-augmented | fd64302e845d9348ea74dfb1af4447341cee5f96 | 2022-05-15T01:02:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aware-ai | null | aware-ai/wav2vec2-xls-r-1b-german-augmented | 2 | null | transformers | 25,916 | Entry not found |
PSW/low_resource_percent10_min2swap_seed42 | 4cfc6caa2a68db8875e5895d32a1d448a499783b | 2022-05-12T07:43:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent10_min2swap_seed42 | 2 | null | transformers | 25,917 | Entry not found |
ceggian/sbert_pt_reddit_softmax_512 | 5d0da98baad0dd9d457ea5501ffc2e72b4623798 | 2022-05-11T16:59:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_softmax_512 | 2 | null | sentence-transformers | 25,918 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
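As above, the training call is not shown in the card; a minimal sketch (placeholder labelled pairs, an assumed base checkpoint, and an assumed number of classes, since the card does not state it) might look like:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bert-base-uncased")  # assumed base checkpoint

# Placeholder labelled sentence pairs; SoftmaxLoss expects an integer class per pair
train_examples = [
    InputExample(texts=[f"post {i}", f"reply {i}"], label=i % 3) for i in range(32)
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,  # assumption: the number of classes is not given in the card
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="WarmupLinear",
    warmup_steps=11775,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```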
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ceggian/sbert_pt_reddit_mnr_128 | 0115e8596e7c939ecc7c4361b1e66e879e63c892 | 2022-05-11T19:05:37.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_mnr_128 | 2 | null | sentence-transformers | 25,919 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39289 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3928,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ceggian/sbert_pt_reddit_mnr_32 | 546e637f458f41aa95da156c399c5f68bf14072e | 2022-05-11T21:33:56.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_mnr_32 | 2 | null | sentence-transformers | 25,920 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39289 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3928,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Dizzykong/gpt2-large-quests | da925033fee27cb4ec151ec94a747fbe3398a75c | 2022-05-12T00:51:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-large-quests | 2 | null | transformers | 25,921 | Entry not found |
ceggian/sbert_pt_reddit_softmax_32 | c77e3b13268baff060670854c4d696d8a8bb2906 | 2022-05-12T05:28:54.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_softmax_32 | 2 | null | sentence-transformers | 25,922 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
withU/kogpt2-emotion-chatbot | 2dd5ffb2b0a5860f184afc45c222537f97187f4a | 2022-05-16T07:58:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | withU | null | withU/kogpt2-emotion-chatbot | 2 | null | transformers | 25,923 | # KoGPT2-emotion-chatbot
KoGPT2 on Hugging Face Transformers for psychological counseling
- [full project link](https://github.com/jiminAn/Capstone_2022)
## how to use
```python
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast
model = GPT2LMHeadModel.from_pretrained("withU/kogpt2-emotion-chatbot")
tokenizer = PreTrainedTokenizerFast.from_pretrained("withU/kogpt2-emotion-chatbot")
input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt")
output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=80, num_return_sequences=4)
for generated_sequence in output_sequences:
generated_sequence = generated_sequence.tolist()
print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)))
```
## dataset finetuned on
- [wellness dataset](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006)
- [emotion corpus of conversations](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-010)
- [chatbot data](https://jeongukjae.github.io/tfds-korean/datasets/korean_chatbot_qa_data.html)
## references
- [WellnessConversation-LanguageModel](https://github.com/nawnoes/WellnessConversation-LanguageModel)
- [KoGPT2: SKT-AI](https://github.com/SKT-AI/KoGPT2) |
ali-issa/FYP_ARABIZI | 9ea119f7c4b3010b0e98761bd2cee160424d6744 | 2022-05-12T10:47:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali-issa | null | ali-issa/FYP_ARABIZI | 2 | null | transformers | 25,924 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-Arabizi-gpu-colab-similar-to-german-param
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-Arabizi-gpu-colab-similar-to-german-param
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5609
- Wer: 0.4042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6416 | 2.83 | 400 | 2.8983 | 1.0 |
| 1.4951 | 5.67 | 800 | 0.6272 | 0.6097 |
| 0.6419 | 8.51 | 1200 | 0.5491 | 0.5069 |
| 0.4767 | 11.35 | 1600 | 0.5152 | 0.4553 |
| 0.3899 | 14.18 | 2000 | 0.5436 | 0.4475 |
| 0.3342 | 17.02 | 2400 | 0.5400 | 0.4431 |
| 0.2982 | 19.85 | 2800 | 0.5599 | 0.4248 |
| 0.2738 | 22.69 | 3200 | 0.5401 | 0.4103 |
| 0.2563 | 25.53 | 3600 | 0.5710 | 0.4198 |
| 0.2443 | 28.37 | 4000 | 0.5609 | 0.4042 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Sumedha/distilbert-base-uncased-finetuned-imdb | b85b29fb58325e47035b1b2c1eba594e283db3c2 | 2022-05-12T11:10:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Sumedha | null | Sumedha/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 25,925 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4884 |
| 2.5761 | 2.0 | 314 | 2.4230 |
| 2.5255 | 3.0 | 471 | 2.4356 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.0
- Tokenizers 0.11.0
|
creynier/wav2vec2-base-swbd-turn-eos-long_short2s_utt_removed_3percent | 20a8686409ba1150e76da680af3255494de0eb18 | 2022-05-12T10:24:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short2s_utt_removed_3percent | 2 | null | transformers | 25,926 | Entry not found |
Fawreez/DialoGPT-small-raptor | 70ba01b8ea315bbf4858cfd7a59d32bf25339f40 | 2022-05-12T12:38:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Fawreez | null | Fawreez/DialoGPT-small-raptor | 2 | null | transformers | 25,927 | ---
tags:
- conversational
---
# Fawreez DialoGPT Model |
creynier/wav2vec2-base-swbd-turn-eos-long_short1-8s_utt_removed_5percent | 9e9eb99c247a331166216a2b1f1b0d24c7666381 | 2022-05-13T06:27:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short1-8s_utt_removed_5percent | 2 | null | transformers | 25,928 | Entry not found |
gonzpen/gbert-base-ft-edu-redux | c18078334f82b40f5c60bf7a797b17182b05131b | 2022-05-13T10:42:51.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"license:mit"
] | text-classification | false | gonzpen | null | gonzpen/gbert-base-ft-edu-redux | 2 | null | transformers | 25,929 | ---
language: de
license: mit
---
# German BERT base fine-tuned to predict educational requirements
This is a fine-tuned version of the German BERT base language model [deepset/gbert-base](https://huggingface.co/deepset/gbert-base). The multilabel task this model was trained on was to predict education requirements from job ad texts. The dataset used for training is not available to the public. The 7 labels in the task are (in the classification head order):
- `'Bachelor'`
- `'Berufsausbildung'`
- `'Doktorat oder äquivalent'`
- `'Höhere Berufsausbildung'`
- `'Master'`
- `'Sonstiges'`
- `'keine Ausbildungserfordernisse'`
The number of representatives of these labels in each of the splits (train/test/val) of the dataset is summarized in the following table:
| Label name | All data | Training | Validation | Test |
|------------|----------|----------|------------|------|
| Bachelor | 521 | 365 | 52 | 104 |
| Berufsausbildung | 1854 | 1298 | 185 | 371 |
| Doktorat oder äquivalent | 38 | 27 | 4 | 7 |
| Höhere Berufsausbildung | 564 | 395 | 56 | 113 |
| Master | 245 | 171 | 25 | 49 |
| Sonstiges | 819 | 573 | 82 | 164 |
| keine Ausbildungserfordernisse | 176 | 123 | 18 | 35 |
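A hedged usage sketch (the card itself includes no inference code): score a job-ad sentence against the 7 labels, applying a sigmoid per label as implied by the BCE training objective. The example sentence and the 0.5-style thresholding decision are placeholders, not from the card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gonzpen/gbert-base-ft-edu-redux")
model = AutoModelForSequenceClassification.from_pretrained("gonzpen/gbert-base-ft-edu-redux")

text = "Abgeschlossenes Bachelorstudium in Informatik oder vergleichbare Ausbildung."  # placeholder job-ad sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # independent per-label probabilities
for i, p in enumerate(probs):
    print(model.config.id2label.get(i, f"label_{i}"), round(p.item(), 3))
```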
## Performance
Training consisted of [minimizing the binary cross-entropy (BCE)](https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_minimization) loss between the model's predictions and the actual labels in the training set. During training, a weighted version of the [label ranking average precision (LRAP)](https://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-average-precision) was tracked for the testing set. LRAP measures what fraction of higher-ranked labels produced by the model were true labels. To account for the label imbalance, the rankings were weighted so that improperly ranked rare labels are penalized more than their more frequent counterparts. After training was complete, the model with highest weighted LRAP was saved.
```
LRAP: 0.93
```
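The exact weighting scheme is not spelled out in the card, so the snippet below is only a sketch: it computes LRAP with scikit-learn and approximates the label-imbalance weighting with per-sample weights derived from inverse label frequency, on dummy data shaped like the 7-label task above.
```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

def weighted_lrap(y_true, y_score):
    """LRAP where samples whose true labels are rare count more (assumed weighting)."""
    label_freq = y_true.sum(axis=0) / len(y_true)           # per-label frequency
    rarity = 1.0 / np.clip(label_freq, 1e-8, None)          # inverse-frequency weights
    sample_weight = (y_true * rarity).sum(axis=1)           # one weight per sample
    return label_ranking_average_precision_score(
        y_true, y_score, sample_weight=sample_weight / sample_weight.sum()
    )

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(256, 7))   # dummy binary labels for the 7 classes
y_score = rng.random((256, 7))               # dummy sigmoid outputs
print(weighted_lrap(y_true, y_score))
```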
# See also:
- [deepset/gbert-base](https://huggingface.co/deepset/gbert-base)
- [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- [gonzpen/gbert-large-ft-edu-redux](https://huggingface.co/gonzpen/gbert-large-ft-edu-redux)
## Authors
Rodrigo C. G. Pena: `rodrigocgp [at] gmail.com`
|
aajrami/bert-ascii-base | ac0934a8496f5820a46c7568c1d29e13569b9da2 | 2022-06-01T11:51:29.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2203.10415",
"transformers",
"bert",
"license:cc-by-4.0"
] | feature-extraction | false | aajrami | null | aajrami/bert-ascii-base | 2 | null | transformers | 25,930 | ---
tags:
- bert
license: cc-by-4.0
---
## bert-ascii-base
is a BERT base Language Model pre-trained by predicting the summation of the **ASCII** code values of the characters in a masked token as a pre-training objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://arxiv.org/abs/2203.10415)
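For concreteness, a minimal sketch of the target described above; how non-ASCII characters or tokenizer-specific normalization were handled is not specified in the card, so this only covers the plain-ASCII case:
```python
def ascii_sum_target(token: str) -> int:
    """Sum of the ASCII code values of the characters in a token."""
    return sum(ord(ch) for ch in token)

print(ascii_sum_target("language"))  # 836
```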
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{alajrami2022does,
title={How does the pre-training objective affect what large language models learn about linguistic properties?},
author={Alajrami, Ahmed and Aletras, Nikolaos},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
pages={131--147},
year={2022}
}
``` |
ceggian/sbert_pt_reddit_softmax_64 | 8139435724180750ae209cc86e62e362c2d275d6 | 2022-05-12T20:23:45.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_softmax_64 | 2 | null | sentence-transformers | 25,931 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
danieleV9H/hubert-base-timit-demo-google-colab-ft30ep_v5 | b3aadc14f39e5b1958e88ec049205d322d61e018 | 2022-05-14T10:32:52.000Z | [
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | danieleV9H | null | danieleV9H/hubert-base-timit-demo-google-colab-ft30ep_v5 | 2 | null | transformers | 25,932 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hubert-base-timit-demo-google-colab-ft30ep_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-timit-demo-google-colab-ft30ep_v5
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the timit-asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4763
- Wer: 0.3322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.9596 | 0.87 | 500 | 3.1237 | 1.0 |
| 2.5388 | 1.73 | 1000 | 1.1689 | 0.9184 |
| 1.0448 | 2.6 | 1500 | 0.6106 | 0.5878 |
| 0.6793 | 3.46 | 2000 | 0.4912 | 0.5200 |
| 0.5234 | 4.33 | 2500 | 0.4529 | 0.4798 |
| 0.4368 | 5.19 | 3000 | 0.4239 | 0.4543 |
| 0.3839 | 6.06 | 3500 | 0.4326 | 0.4339 |
| 0.3315 | 6.92 | 4000 | 0.4265 | 0.4173 |
| 0.2878 | 7.79 | 4500 | 0.4304 | 0.4068 |
| 0.25 | 8.65 | 5000 | 0.4130 | 0.3940 |
| 0.242 | 9.52 | 5500 | 0.4310 | 0.3938 |
| 0.2182 | 10.38 | 6000 | 0.4204 | 0.3843 |
| 0.2063 | 11.25 | 6500 | 0.4449 | 0.3816 |
| 0.2099 | 12.11 | 7000 | 0.4016 | 0.3681 |
| 0.1795 | 12.98 | 7500 | 0.4027 | 0.3647 |
| 0.1604 | 13.84 | 8000 | 0.4294 | 0.3664 |
| 0.1683 | 14.71 | 8500 | 0.4412 | 0.3661 |
| 0.1452 | 15.57 | 9000 | 0.4484 | 0.3588 |
| 0.1491 | 16.44 | 9500 | 0.4508 | 0.3515 |
| 0.1388 | 17.3 | 10000 | 0.4240 | 0.3518 |
| 0.1399 | 18.17 | 10500 | 0.4605 | 0.3513 |
| 0.1265 | 19.03 | 11000 | 0.4412 | 0.3485 |
| 0.1137 | 19.9 | 11500 | 0.4520 | 0.3467 |
| 0.106 | 20.76 | 12000 | 0.4873 | 0.3426 |
| 0.1243 | 21.63 | 12500 | 0.4456 | 0.3396 |
| 0.1055 | 22.49 | 13000 | 0.4819 | 0.3406 |
| 0.1124 | 23.36 | 13500 | 0.4613 | 0.3391 |
| 0.1064 | 24.22 | 14000 | 0.4842 | 0.3430 |
| 0.0875 | 25.09 | 14500 | 0.4661 | 0.3348 |
| 0.086 | 25.95 | 15000 | 0.4724 | 0.3371 |
| 0.0842 | 26.82 | 15500 | 0.4982 | 0.3381 |
| 0.0834 | 27.68 | 16000 | 0.4856 | 0.3337 |
| 0.0918 | 28.55 | 16500 | 0.4783 | 0.3344 |
| 0.0773 | 29.41 | 17000 | 0.4763 | 0.3322 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
luckydog/bert-base-chinese-finetuned-mosei1 | a24c16e2937efb96fdf9c5ceeefdbb52a12ce431 | 2022-05-13T02:48:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | luckydog | null | luckydog/bert-base-chinese-finetuned-mosei1 | 2 | null | transformers | 25,933 | Entry not found |
misawann/bert-base-jaquad-ffn2150-head-10 | addd80159ef98a9ce17b3507fe621f15182d993a | 2022-05-13T07:11:54.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | misawann | null | misawann/bert-base-jaquad-ffn2150-head-10 | 2 | null | transformers | 25,934 | ---
widget:
- text: "ドクウツボはインド洋とどの海域の熱帯域に分布しますか?"
context: "ドクウツボ(毒鱓)Gymnothoraxjavanicus(Bleeker,1859)は体長3メートルの記録がある大型種で、鰓孔が黒いことで近縁種と区別できる。 インド洋と太平洋の熱帯域に広く分布し、日本では琉球列島で見られる。 "
---
## Model details
- This is [SkelterLabsInc/bert-base-japanese-jaquad](https://huggingface.co/SkelterLabsInc/bert-base-japanese-jaquad) ([cl-tohoku/bert-base-japanese](https://huggingface.co/cl-tohoku/bert-base-japanese) fine-tuned on JaQuAD) with Transformer pruning applied using [TextPruner](https://github.com/airaria/TextPruner).
- Pruning used 1,024 examples from the JaQuAD training data and ran for 10 iterations.
- The FFN size was reduced by 30% and the number of attention heads by 10% (ffn: 3072, head: 12 -> ffn: 2150, head: 10); an illustrative sketch follows this list.
- Note: apply the same preprocessing as the [JaQuAD experiment code](https://github.com/SkelterLabsInc/JaQuAD/blob/main/JaQuAD.ipynb) before using this model.
- Note: for this reason, the Hosted Inference API on the HF Hub will not return appropriate predictions.
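The pruning itself was done with TextPruner and is not reproduced here. Purely as an illustration of what the 12 -> 10 head reduction means structurally, the Transformers built-in pruning API can remove heads from the fine-tuned checkpoint (which heads to drop is a placeholder choice here, and this sketch does not shrink the FFN):
```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("SkelterLabsInc/bert-base-japanese-jaquad")
# Drop 2 of the 12 attention heads in every layer (head indices are placeholders;
# TextPruner selects them from importance scores instead).
heads_to_prune = {layer: [10, 11] for layer in range(model.config.num_hidden_layers)}
model.prune_heads(heads_to_prune)
print(model.config.pruned_heads)  # records which heads were removed per layer
```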
## Performance on the JaQuAD validation data
- Full model
  - F1 score: 0.779
  - Exact Match: 0.614
- Pruned model
  - F1 score: 0.756
  - Exact Match: 0.587 |
lucifermorninstar011/autotrain-luicfer_company-861827409 | 28636681b7ce6d65a9e3e282d2a8bfdde5f67858 | 2022-05-13T09:20:43.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:lucifermorninstar011/autotrain-data-luicfer_company",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | lucifermorninstar011 | null | lucifermorninstar011/autotrain-luicfer_company-861827409 | 2 | null | transformers | 25,935 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- lucifermorninstar011/autotrain-data-luicfer_company
co2_eq_emissions: 159.62610219360334
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 861827409
- CO2 Emissions (in grams): 159.62610219360334
## Validation Metrics
- Loss: 0.007599336095154285
- Accuracy: 0.9905338980217686
- Precision: 0.9557812806826499
- Recall: 0.9549459565512075
- F1: 0.9553634360250886
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lucifermorninstar011/autotrain-luicfer_company-861827409
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lucifermorninstar011/autotrain-luicfer_company-861827409", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lucifermorninstar011/autotrain-luicfer_company-861827409", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
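# Follow-up sketch (not part of the AutoTrain-generated card): map the logits
# back to a per-token entity label via the model's id2label mapping.
predictions = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])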
``` |
PSW/cnndm_0.1percent_maxsimdel_seed1 | af110b7d6aa74fdb78a805043094064a361c4830 | 2022-05-15T11:50:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_maxsimdel_seed1 | 2 | null | transformers | 25,936 | Entry not found |
PSW/cnndm_0.1percent_randomsimdel_seed1 | fb4657bb2e0fadfe854f288d4674d14bc4fb1952 | 2022-05-15T15:11:22.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomsimdel_seed1 | 2 | null | transformers | 25,937 | Entry not found |
Ninh/xlm-roberta-base-finetuned-panx-de | d8d62c04b5284d29bdd96e9b5dfc5dcc1c088ebc | 2022-05-13T09:48:15.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Ninh | null | Ninh/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 25,938 | ---
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.861182081417135
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1395
- F1: 0.8612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1663 | 0.8258 |
| 0.1311 | 2.0 | 1050 | 0.1401 | 0.8496 |
| 0.0811 | 3.0 | 1575 | 0.1395 | 0.8612 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
PSW/cnndm_0.1percent_minsimins_seed1 | 63d64432ac919e2e28d29d3b7ce145c558b0882c | 2022-05-15T18:31:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minsimins_seed1 | 2 | null | transformers | 25,939 | Entry not found |
scasutt/wav2vec2-large-xlsr-53_full_train_full_train | 0db2101445fadc5a40e5914ce0aaa1ae32d96f8b | 2022-05-16T13:22:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_full_train_full_train | 2 | null | transformers | 25,940 | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_full_train_full_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_full_train_full_train
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8369
- Wer: 0.5052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.533 | 1.35 | 1000 | 0.3547 | 0.3483 |
| 0.4531 | 2.69 | 2000 | 0.8369 | 0.5052 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
manirai91/xlm-roberta-imdb | de8ce6543965ac21c681aec20288ad2fc198d870 | 2022-05-13T15:28:58.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | manirai91 | null | manirai91/xlm-roberta-imdb | 2 | null | transformers | 25,941 | Entry not found |
versae/bertin-roberta-base-spanish-finetuned-recores | 596e4f404b010e0e105f5ef8f5329666c23422d8 | 2022-05-13T18:00:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index"
] | multiple-choice | false | versae | null | versae/bertin-roberta-base-spanish-finetuned-recores | 2 | null | transformers | 25,942 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertin-roberta-base-spanish-finetuned-recores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-base-spanish-finetuned-recores
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2985
- Accuracy: 0.3581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
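The card does not show how the multiple-choice head is meant to be called; below is a hedged sketch following the usual `AutoModelForMultipleChoice` pattern. The question and options are placeholders, and the exact input formatting used for RECORES may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "versae/bertin-roberta-base-spanish-finetuned-recores"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

question = "¿Ejemplo de pregunta?"          # placeholder
options = ["opción A", "opción B"]           # placeholders

# Encode each (question, option) pair, then add a batch dimension of 1.
encoding = tokenizer([question] * len(options), options,
                     return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}).logits
predicted_option = int(logits.argmax(-1))
```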
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6065 | 1.0 | 1047 | 1.5944 | 0.2948 |
| 1.4913 | 2.0 | 2094 | 2.4456 | 0.3581 |
| 0.7893 | 3.0 | 3141 | 3.4247 | 0.3691 |
| 0.2117 | 4.0 | 4188 | 3.9878 | 0.3526 |
| 0.0509 | 5.0 | 5235 | 4.2985 | 0.3581 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
manirai91/xlm-roberta-conll2003 | d17a673f5979ed84061755cb06a7a802811013ab | 2022-05-13T15:45:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | manirai91 | null | manirai91/xlm-roberta-conll2003 | 2 | null | transformers | 25,943 | Entry not found |
PSW/cnndm_0.1percent_randomswap_seed1 | 31a2e22c62791c4b6c8ab7172fc4f884726bbcf6 | 2022-05-16T14:28:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomswap_seed1 | 2 | null | transformers | 25,944 | Entry not found |
nepp1d0/TAPE-finetuned-viralProteins | f8430140a9d415180264cea2328cda6e640afc77 | 2022-05-13T21:27:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | nepp1d0 | null | nepp1d0/TAPE-finetuned-viralProteins | 2 | null | transformers | 25,945 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: TAPE-finetuned-viralProteins
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TAPE-finetuned-viralProteins
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9033
- Accuracy: 0.87
- F1: 0.8555
- Precision: 0.8475
- Recall: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
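The four metric columns in the results table below can be produced by a `compute_metrics` callback along these lines; the weighted averaging is an assumption, since the card does not state how F1, precision, and recall were aggregated.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Hedged sketch of a Trainer metrics callback for this classification head."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```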
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8845 | 1.0 | 5000 | 0.8302 | 0.85 | 0.8060 | 0.7779 | 0.85 |
| 0.8189 | 2.0 | 10000 | 0.6062 | 0.86 | 0.8255 | 0.8115 | 0.86 |
| 0.806 | 3.0 | 15000 | 0.8546 | 0.85 | 0.8095 | 0.7840 | 0.85 |
| 0.6971 | 4.0 | 20000 | 0.7660 | 0.86 | 0.8228 | 0.8027 | 0.86 |
| 0.6269 | 5.0 | 25000 | 0.7787 | 0.85 | 0.8343 | 0.8226 | 0.85 |
| 0.5771 | 6.0 | 30000 | 0.7965 | 0.855 | 0.8402 | 0.8290 | 0.855 |
| 0.5433 | 7.0 | 35000 | 0.7864 | 0.875 | 0.8573 | 0.8473 | 0.875 |
| 0.5183 | 8.0 | 40000 | 0.8292 | 0.87 | 0.8521 | 0.8425 | 0.87 |
| 0.4396 | 9.0 | 45000 | 0.8838 | 0.875 | 0.8566 | 0.8483 | 0.875 |
| 0.4019 | 10.0 | 50000 | 0.9033 | 0.87 | 0.8555 | 0.8475 | 0.87 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
PSW/cnndm_0.5percent_randomsimins_seed1 | c0b34de44833d6cf0bb245b10c371fcc79d7ecde | 2022-05-17T11:37:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_randomsimins_seed1 | 2 | null | transformers | 25,946 | Entry not found |
AnonymousSub/rule_based_roberta_kldiv_hier_triplet_epochs_1_shard_1 | 68f7c65fd37b26f120658f572099e7d5d0fd713b | 2022-05-14T01:46:55.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_kldiv_hier_triplet_epochs_1_shard_1 | 2 | null | transformers | 25,947 | Entry not found |
PSW/cnndm_0.5percent_minmaxswap_seed1 | 847884f91e95e0e97af3ce1494c64d44b84eb5dd | 2022-05-17T15:30:14.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_minmaxswap_seed1 | 2 | null | transformers | 25,948 | Entry not found |
PSW/cnndm_0.5percent_min2swap_seed1 | 79bdf0976018d6bc23bc48481818a7a9ae97d679 | 2022-05-17T19:02:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_min2swap_seed1 | 2 | null | transformers | 25,949 | Entry not found |
AnonymousSub/rule_based_roberta_kldiv_hier_triplet_epochs_1_shard_1_squad2.0 | 0d3fd7976e68f5aff5822efe53d2677ecfa9522d | 2022-05-14T03:54:30.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_kldiv_hier_triplet_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 25,950 | Entry not found |
PSW/cnndm_0.5percent_max2swap_seed1 | a60a67d914f3438eacabfde9f0f79fa2dde88edb | 2022-05-17T22:34:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_max2swap_seed1 | 2 | null | transformers | 25,951 | Entry not found |
PSW/cnndm_0.5percent_randomswap_seed1 | d9b040920ac754bfd4916f6d24bd4e6ced44fe66 | 2022-05-18T02:06:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_randomswap_seed1 | 2 | null | transformers | 25,952 | Entry not found |
CEBaB/bert-base-uncased.CEBaB-challenge.sa.2-class.exclusive.seed_66 | 0c282fb111f50da3cf906a9fa5ed54a332c538b8 | 2022-05-14T17:30:13.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB-challenge.sa.2-class.exclusive.seed_66 | 2 | null | transformers | 25,953 | Entry not found |
CEBaB/bert-base-uncased.CEBaB-challenge.sa.2-class.inclusive.seed_99 | 5e74bc91486fff3d6cbe73b1941b6ca469b9dd68 | 2022-05-14T18:19:28.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB-challenge.sa.2-class.inclusive.seed_99 | 2 | null | transformers | 25,954 | Entry not found |
PSW/cnndm_10percent_minsimins_seed1 | 63707f42c5b67374982889654a5ce4aa65535cab | 2022-05-14T18:35:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_10percent_minsimins_seed1 | 2 | null | transformers | 25,955 | Entry not found |
claytonsamples/xlm-roberta-base-finetuned-panx-de | cc082399ce177aa3d817656b9c79378e851fccda | 2022-05-14T19:19:42.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | claytonsamples | null | claytonsamples/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 25,956 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
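A hedged usage sketch for the fine-tuned NER model this card describes (the example sentence is a placeholder, not part of the original card):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="claytonsamples/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```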
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
subhasisj/zh-kd-XLM-minilmv2-4 | 6a1322ad893c33f0c043b28838d3755a9eec1d15 | 2022-05-16T12:40:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/zh-kd-XLM-minilmv2-4 | 2 | null | transformers | 25,957 | Multilingual MiniLMv2 fine-tuned using Knowledge Distillation with an XLM-RoBERTa base teacher model on the ZH (Chinese) language. |
anas-awadalla/roberta-large-few-shot-k-16-finetuned-squad-seed-0 | 41db820e8d4119f5ab2d55ded4f505284611441b | 2022-05-14T19:22:38.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-16-finetuned-squad-seed-0 | 2 | null | transformers | 25,958 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-16-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-16-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
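Unlike epoch-based fine-tuning runs, this few-shot card fixes the number of optimisation steps; a hedged sketch of the equivalent `TrainingArguments` (output directory assumed, data loading omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-few-shot-k-16-squad",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,   # 0.1 * 200 = 20 warmup steps
    max_steps=200,      # the card's "training_steps: 200"
)
```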
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-16-finetuned-squad-seed-4 | 70280671ea96fb624e17bf9207fe99dcf5413af6 | 2022-05-14T19:42:04.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-16-finetuned-squad-seed-4 | 2 | null | transformers | 25,959 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-16-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-32-finetuned-squad-seed-4 | f12b756e58d4421bb8778d32f45191344100d923 | 2022-05-14T20:18:03.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-large-few-shot-k-32-finetuned-squad-seed-4 | 2 | null | transformers | 25,960 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-32-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-64-finetuned-squad-seed-0 | 65a51d0026c4c04ac01d19be7ba5995ffe292a3b | 2022-05-14T20:24:34.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-64-finetuned-squad-seed-0 | 2 | null | transformers | 25,961 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-64-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-64-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/roberta-large-few-shot-k-128-finetuned-squad-seed-0 | a6178b60fc2577f1af1cfd8eb1f6976ece0f8795 | 2022-05-14T20:58:03.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-128-finetuned-squad-seed-0 | 2 | null | transformers | 25,962 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-128-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anas-awadalla/splinter-large-few-shot-k-128-finetuned-squad-seed-4 | 50b1bb2aeb0178828000ba58c54ba284f2445774 | 2022-05-14T21:28:38.000Z | [
"pytorch",
"splinter",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/splinter-large-few-shot-k-128-finetuned-squad-seed-4 | 2 | null | transformers | 25,963 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: splinter-large-few-shot-k-128-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-128-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
PSW/cnndm_10percent_maxsimins_seed1 | 0195468bb98f032480ab43fdcb0eadd5e05fb8a0 | 2022-05-14T21:44:17.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_10percent_maxsimins_seed1 | 2 | null | transformers | 25,964 | Entry not found |
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-4 | aa548cae10a285980532b0fc1024684cf72ac3a4 | 2022-05-14T22:02:44.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-4 | 2 | null | transformers | 25,965 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-256-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ruselkomp/xlm-roberta | a932fab91b30bc1396de4a4adeeedf263127ad58 | 2022-05-15T07:26:51.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/xlm-roberta | 2 | null | transformers | 25,966 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta
This model is a fine-tuned version of [AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru](https://huggingface.co/AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0083 | 1.0 | 15104 | 0.9420 |
| 0.8093 | 2.0 | 30208 | 0.9264 |
| 0.5576 | 3.0 | 45312 | 1.1842 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2.dev0
- Tokenizers 0.12.1
|
anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-2 | f3bd05d446c3c87f74574bebc2c190325bc460b5 | 2022-05-14T23:31:40.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-2 | 2 | null | transformers | 25,967 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-few-shot-k-1024-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-1024-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
fatirali/DialoGPT-medium-harrypotter | bfc30b8647fd1727747d68cbd65bde4429e51050 | 2022-05-16T06:46:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | fatirali | null | fatirali/DialoGPT-medium-harrypotter | 2 | null | transformers | 25,968 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
PSW/cnndm_0.1percent_minsimdel_seed27 | d3e2b6a0767898e3ce11cfc14cb07700f6b6218f | 2022-05-15T09:38:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minsimdel_seed27 | 2 | null | transformers | 25,969 | Entry not found |
PSW/cnndm_0.1percent_minsimdel_seed42 | f47481deded24ab6ee4e3701af368a317a5e064e | 2022-05-15T10:47:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minsimdel_seed42 | 2 | null | transformers | 25,970 | Entry not found |
PSW/cnndm_0.1percent_maxsimdel_seed27 | 56bb06d70b12778f7770334bbd3707fc6b1905be | 2022-05-15T12:59:23.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_maxsimdel_seed27 | 2 | null | transformers | 25,971 | Entry not found |
PSW/cnndm_0.1percent_randomsimdel_seed42 | 5c13626fb426b8dee24359bc6b46f29632325617 | 2022-05-15T17:28:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomsimdel_seed42 | 2 | null | transformers | 25,972 | Entry not found |
ali-issa/4-wav2vec2-arabic-gpu-colab-similar-to-german-less-warm-ups | da81bf44f63df5573af85e3b25f84f66f97d9fb5 | 2022-05-16T01:51:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali-issa | null | ali-issa/4-wav2vec2-arabic-gpu-colab-similar-to-german-less-warm-ups | 2 | null | transformers | 25,973 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-arabic-gpu-colab-similar-to-german-less-warm-ups
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-arabic-gpu-colab-similar-to-german-less-warm-ups
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6937
- Wer: 0.4204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.1807 | 2.83 | 400 | 3.0778 | 1.0 |
| 2.9844 | 5.67 | 800 | 2.8777 | 1.0 |
| 2.5142 | 8.51 | 1200 | 1.2195 | 0.8743 |
| 1.1035 | 11.35 | 1600 | 0.7026 | 0.6095 |
| 0.7302 | 14.18 | 2000 | 0.6435 | 0.5437 |
| 0.5551 | 17.02 | 2400 | 0.6070 | 0.4874 |
| 0.4428 | 19.85 | 2800 | 0.5915 | 0.4551 |
| 0.3592 | 22.69 | 3200 | 0.5830 | 0.4416 |
| 0.3033 | 25.53 | 3600 | 0.6089 | 0.4375 |
| 0.2618 | 28.37 | 4000 | 0.6523 | 0.4334 |
| 0.2328 | 31.2 | 4400 | 0.6716 | 0.4193 |
| 0.2109 | 34.04 | 4800 | 0.6733 | 0.4281 |
| 0.1974 | 36.88 | 5200 | 0.6793 | 0.4269 |
| 0.1886 | 39.71 | 5600 | 0.6937 | 0.4204 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_42 | a6fe93f719abaf9813a27cf8403131e1feefe1ef | 2022-05-15T20:35:21.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.inclusive.seed_42 | 2 | null | transformers | 25,974 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_88 | 5d802822485c7a024560ec140aaad469949c46c3 | 2022-05-15T22:08:16.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.inclusive.seed_88 | 2 | null | transformers | 25,975 | Entry not found |
HenryAI/FAU-CORD19 | aac4a3bb231d3375177e268b4d89953fb3b4be91 | 2022-05-15T23:08:52.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | HenryAI | null | HenryAI/FAU-CORD19 | 2 | null | transformers | 25,976 | Entry not found |
PSW/cnndm_0.1percent_maxsimins_seed27 | a0edbfa9239a6a8ce65fcdf0e81ee38c64c366c3 | 2022-05-15T22:57:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_maxsimins_seed27 | 2 | null | transformers | 25,977 | Entry not found |
CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_88 | e54fc80863c2534949dd5edd6194b68c83a3e09c | 2022-05-16T00:30:49.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.5-class.exclusive.seed_88 | 2 | null | transformers | 25,978 | Entry not found |
CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_99 | 8b7e3b5b25143f72f0b25b02901ba7cb888ebbb2 | 2022-05-16T00:40:10.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.2-class.exclusive.seed_99 | 2 | null | transformers | 25,979 | Entry not found |
CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_99 | 7fd74c96b81b20cc5eaa989af96ea552b30d4628 | 2022-05-16T00:49:40.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | CEBaB | null | CEBaB/t5-base.CEBaB.sa.3-class.exclusive.seed_99 | 2 | null | transformers | 25,980 | Entry not found |
PSW/cnndm_0.1percent_randomsimins_seed27 | c002c2f998a3f5e747f49532dcbe3f45feceafe6 | 2022-05-16T02:16:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomsimins_seed27 | 2 | null | transformers | 25,981 | Entry not found |
LDD/wwm | 3f0f9a601a8f6d8f7b3d02fef98f7b7039d2e494 | 2022-05-16T03:26:49.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | LDD | null | LDD/wwm | 2 | null | transformers | 25,982 | Entry not found |
dreamerdeo/ground-en-roberta-base | bc9b3b1ed9a2939aa2f0f89d42edd6ea6c7a3e95 | 2022-05-16T05:41:04.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dreamerdeo | null | dreamerdeo/ground-en-roberta-base | 2 | null | transformers | 25,983 | Entry not found |
PSW/cnndm_0.1percent_minmaxswap_seed42 | 4654b91f4dc7efa3cbf30ea17353713caa459e63 | 2022-05-16T06:44:37.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_minmaxswap_seed42 | 2 | null | transformers | 25,984 | Entry not found |
ceggian/sbert_pt_reddit_softmax_256 | 7772ca36d1eba75afb9cf570453bce378fbed342 | 2022-05-16T06:52:11.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ceggian | null | ceggian/sbert_pt_reddit_softmax_256 | 2 | null | sentence-transformers | 25,985 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
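The JSON above is the raw `fit()` configuration; a hedged sketch of what an equivalent sentence-transformers training script could look like is shown below. The base model, example pairs, and label count are placeholders, not taken from the card.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("bert-base-uncased")  # placeholder base model
train_examples = [
    InputExample(texts=["a post", "a reply"], label=0),        # placeholder pair
    InputExample(texts=["a post", "another reply"], label=1),  # placeholder pair
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=2,  # placeholder; the card does not state the label count
)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=11775,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```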
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
PSW/cnndm_0.1percent_min2swap_seed27 | 965b7274b8848f8da10b0046d7094fd680034c73 | 2022-05-16T08:56:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_min2swap_seed27 | 2 | null | transformers | 25,986 | Entry not found |
PSW/cnndm_0.1percent_min2swap_seed42 | 53328c7fe9ece7ff8817a2b6834120523c0a34e6 | 2022-05-16T10:05:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_min2swap_seed42 | 2 | null | transformers | 25,987 | Entry not found |
PSW/cnndm_0.1percent_max2swap_seed42 | 4888bc90e100a3e5f76fa16dfcd1a7fcb7d1c819 | 2022-05-16T13:25:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_max2swap_seed42 | 2 | null | transformers | 25,988 | Entry not found |
huawei-noah/AutoTinyBERT-KD-S3 | bec642d680cfd863656443b93f5bf7e2fb6f5aa0 | 2022-05-16T15:13:45.000Z | [
"pytorch",
"transformers",
"license:other"
] | null | false | huawei-noah | null | huawei-noah/AutoTinyBERT-KD-S3 | 2 | null | transformers | 25,989 | ---
license: other
---
|
PSW/cnndm_0.1percent_randomswap_seed42 | 0bd0987a509d801112cecdfc645dd123672550dd | 2022-05-16T16:47:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.1percent_randomswap_seed42 | 2 | null | transformers | 25,990 | Entry not found |
PSW/cnndm_0.5percent_minsimdel_seed27 | e97a0a2cb2038c18990372f89dcdd16289c2fda9 | 2022-05-16T19:07:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_minsimdel_seed27 | 2 | null | transformers | 25,991 | Entry not found |
PSW/cnndm_0.5percent_minsimdel_seed42 | 8472c635fcd85c987b102c5d66fccaa205901f47 | 2022-05-16T20:20:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_minsimdel_seed42 | 2 | null | transformers | 25,992 | Entry not found |
evolvingstuff/bert-base-cased-wikitext2 | fbbb944928d9ce4de333f969d9dd94af9413eb97 | 2022-05-16T22:05:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | evolvingstuff | null | evolvingstuff/bert-base-cased-wikitext2 | 2 | null | transformers | 25,993 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8574
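For a masked-language-modelling fine-tune like this one, the evaluation loss is commonly converted to perplexity; a quick, hedged conversion of the reported value:

```python
import math

eval_loss = 6.8574  # evaluation loss reported above
print(f"perplexity ~ {math.exp(eval_loss):.0f}")  # roughly 951
```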
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0916 | 1.0 | 2346 | 7.0492 |
| 6.9039 | 2.0 | 4692 | 6.8751 |
| 6.8845 | 3.0 | 7038 | 6.8929 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
carlosaguayo/features_and_usecases_05162022_603 | 60201698c7a9309fa024333b45c275add756917d | 2022-05-16T22:03:05.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | carlosaguayo | null | carlosaguayo/features_and_usecases_05162022_603 | 2 | null | sentence-transformers | 25,994 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# carlosaguayo/features_and_usecases_05162022_603
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('carlosaguayo/features_and_usecases_05162022_603')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=carlosaguayo/features_and_usecases_05162022_603)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 175 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
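In contrast to softmax-classification training, this model was fitted with a cosine-similarity regression loss; a hedged sketch of the pair-plus-score data format it expects (the texts and scores below are placeholders):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("carlosaguayo/features_and_usecases_05162022_603")
# CosineSimilarityLoss regresses the cosine of the two embeddings onto a float score.
examples = [
    InputExample(texts=["feature description", "matching use case"], label=0.9),
    InputExample(texts=["feature description", "unrelated use case"], label=0.1),
]
loader = DataLoader(examples, shuffle=False, batch_size=16)
loss = losses.CosineSimilarityLoss(model)
model.fit([(loader, loss)], epochs=1, warmup_steps=100)
```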
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
PSW/cnndm_0.5percent_maxsimdel_seed27 | 8ad9eb355ca9d3eafaae892ea8b08a97139e5278 | 2022-05-16T22:39:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_maxsimdel_seed27 | 2 | null | transformers | 25,995 | Entry not found |
bkh6722/d-l-dl | d2a51927115efeeda6f173b5b5c69325cb056ede | 2022-05-17T16:09:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | bkh6722 | null | bkh6722/d-l-dl | 2 | null | transformers | 25,996 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d-l-dl
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4495
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 42.4143 | 49.8 | 100 | 21.5116 | 1.0 |
| 5.9884 | 99.8 | 200 | 31.7976 | 1.0 |
| 4.0043 | 149.8 | 300 | 3.4829 | 1.0 |
| 3.653 | 199.8 | 400 | 3.6417 | 1.0 |
| 3.5207 | 249.8 | 500 | 3.5081 | 1.0 |
| 3.63 | 299.8 | 600 | 3.4836 | 1.0 |
| 3.648 | 349.8 | 700 | 3.4515 | 1.0 |
| 3.6448 | 399.8 | 800 | 3.4647 | 1.0 |
| 3.6872 | 449.8 | 900 | 3.4371 | 1.0 |
| 3.6892 | 499.8 | 1000 | 3.4337 | 1.0 |
| 3.684 | 549.8 | 1100 | 3.4375 | 1.0 |
| 3.6843 | 599.8 | 1200 | 3.4452 | 1.0 |
| 3.6842 | 649.8 | 1300 | 3.4416 | 1.0 |
| 3.6819 | 699.8 | 1400 | 3.4498 | 1.0 |
| 3.6832 | 749.8 | 1500 | 3.4524 | 1.0 |
| 3.6828 | 799.8 | 1600 | 3.4495 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28 | b9792fe9af050cdf7d50859a3a893e21aae35727 | 2022-05-17T01:17:00.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | SebastianS | null | SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28 | 2 | null | transformers | 25,997 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset, which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
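The original card leaves this section empty. As a rough sketch of what the second, task-specific distillation step described above usually involves, the loss below blends the standard SQuAD cross-entropy with a temperature-scaled KL term against the teacher's start/end logits; the temperature, weighting, and function signature are assumptions, not taken from the card or the paper.

```python
import torch.nn.functional as F

def qa_distillation_loss(student_start, student_end, teacher_start, teacher_end,
                         start_positions, end_positions, T=2.0, alpha=0.5):
    """Hypothetical blend of hard-label cross-entropy and soft-label KL for QA."""
    ce = (F.cross_entropy(student_start, start_positions)
          + F.cross_entropy(student_end, end_positions)) / 2
    kd = (F.kl_div(F.log_softmax(student_start / T, dim=-1),
                   F.softmax(teacher_start / T, dim=-1), reduction="batchmean")
          + F.kl_div(F.log_softmax(student_end / T, dim=-1),
                     F.softmax(teacher_end / T, dim=-1), reduction="batchmean")) * (T ** 2) / 2
    return alpha * ce + (1 - alpha) * kd
```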
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lilitket/20220517-045629 | 6e7542df8a8b7dfcc77cb9f17018fbbb59f34494 | 2022-05-17T03:34:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220517-045629 | 2 | null | transformers | 25,998 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20220517-045629
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20220517-045629
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3700
- Wer: 0.4581
- Cer: 0.0854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1339
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
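The results table below reports both word- and character-error rate; a hedged sketch of computing those two columns with the `evaluate` library (the prediction and reference strings are placeholders):

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["a decoded hypothesis"]       # placeholder model output
references = ["the reference transcript"]    # placeholder ground truth

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```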
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.238 | 0.29 | 200 | 3.1770 | 1.0 | 1.0 |
| 2.165 | 0.59 | 400 | 0.7309 | 0.7144 | 0.1543 |
| 0.7022 | 0.88 | 600 | 0.4614 | 0.5521 | 0.1058 |
| 0.5114 | 1.17 | 800 | 0.4202 | 0.4998 | 0.0965 |
| 0.4482 | 1.47 | 1000 | 0.3786 | 0.4645 | 0.0877 |
| 0.4082 | 1.76 | 1200 | 0.3700 | 0.4581 | 0.0854 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
PSW/cnndm_0.5percent_randomsimdel_seed27 | f477112aa15a02f5a9cffb8daf0203d6bcd6320e | 2022-05-17T02:13:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/cnndm_0.5percent_randomsimdel_seed27 | 2 | null | transformers | 25,999 | Entry not found |