modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jaeyoung/klue_mln_train_only_train | 866e9704bd6b2624bae6e64efccc8af49cbdd7ab | 2021-10-05T17:55:14.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jaeyoung | null | jaeyoung/klue_mln_train_only_train | 0 | null | transformers | 35,400 | Entry not found |
jaeyoung/xlmroberta-klue_mln_train_only_train | 82a2a676cf506a1d386b21fd2b0d853308291ef3 | 2021-10-06T01:35:36.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jaeyoung | null | jaeyoung/xlmroberta-klue_mln_train_only_train | 0 | null | transformers | 35,401 | Entry not found |
jalensmh/DialoGPT-small-exophoria | c788527c91ee833c4ba6740a5a3a29db2707ab38 | 2021-09-02T22:05:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jalensmh | null | jalensmh/DialoGPT-small-exophoria | 0 | null | transformers | 35,402 | ---
tags:
- conversational
---
# exophoria DialoGPT Model |
jamescalam/bert-base-dv | bc4c0034a666b947abbc2fbdf15ed23794afa0b7 | 2021-12-13T13:36:29.000Z | [
"pytorch",
"bert",
"fill-mask",
"dv",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | jamescalam | null | jamescalam/bert-base-dv | 0 | 2 | transformers | 35,403 | ---
language:
- dv
license: apache-2.0
---
# BERT base for Dhivehi
Pretrained model on the Dhivehi language using masked language modeling (MLM).
## Tokenizer
The *WordPiece* tokenizer uses several components (see the sketch after this list):
* **Normalization**: lowercase and then NFKD unicode normalization.
* **Pretokenization**: splits by whitespace and punctuation.
* **Postprocessing**: single sentences are output in format `[CLS] sentence A [SEP]` and pair sentences in format `[CLS] sentence A [SEP] sentence B [SEP]`.
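The training script for this tokenizer is not included in the card, but the components above map directly onto the Hugging Face `tokenizers` API. A minimal construction sketch, assuming placeholder special-token ids and an untrained vocabulary:
```python
from tokenizers import Tokenizer, normalizers, pre_tokenizers, processors
from tokenizers.models import WordPiece

# Sketch only: vocabulary and special-token ids below are placeholders,
# not the values used for jamescalam/bert-base-dv.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.normalizer = normalizers.Sequence([normalizers.Lowercase(), normalizers.NFKD()])
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [pre_tokenizers.WhitespaceSplit(), pre_tokenizers.Punctuation()]
)
tokenizer.post_processor = processors.TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)
```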
## Training
Training was performed over 16M+ Dhivehi sentences/paragraphs put together by [@ashraq](https://huggingface.co/ashraq). An Adam optimizer with weight decay was used with the following parameters (see the sketch after this list):
* Learning rate: 1e-5
* Weight decay: 0.1
* Warmup steps: 10% of data
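The training loop itself is not published in the card. A minimal sketch of the optimizer setup, assuming `torch.optim.AdamW` with a linear warmup schedule and reading the 10% figure as a fraction of optimizer steps:
```python
from torch.optim import AdamW
from transformers import BertConfig, BertForMaskedLM, get_linear_schedule_with_warmup

# Placeholder architecture and step count: the card does not publish these values.
model = BertForMaskedLM(BertConfig())
total_steps = 100_000
warmup_steps = int(0.10 * total_steps)

optimizer = AdamW(model.parameters(), lr=1e-5, weight_decay=0.1)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```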
|
jamiewjm/CCGwGPT2extep3 | 3cda86b992857621884d03fafe2c68d7dd6b58c8 | 2021-11-26T02:41:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jamiewjm | null | jamiewjm/CCGwGPT2extep3 | 0 | null | transformers | 35,404 | Entry not found |
jamiewjm/CCGwGPT2extep3reduce | b5a66485b9c2c782d2766dd80b0923302593d15c | 2021-11-26T02:29:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jamiewjm | null | jamiewjm/CCGwGPT2extep3reduce | 0 | null | transformers | 35,405 | Entry not found |
jannesg/takalane_tsn_roberta | 86add1af71ba1c8df91dda133a1c7f62a136d1a8 | 2021-09-22T08:52:11.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"tn",
"transformers",
"masked-lm",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | jannesg | null | jannesg/takalane_tsn_roberta | 0 | null | transformers | 35,406 | ---
language:
- tn
thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
tags:
- tn
- fill-mask
- pytorch
- roberta
- masked-lm
license: mit
---
# Takalani Sesame - Tswana 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular looks at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_tsn_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_tsn_roberta")
```
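Once loaded, the model can be queried through the standard `fill-mask` pipeline; the Tswana prompt below is only an illustrative placeholder:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jannesg/takalane_tsn_roberta",
    tokenizer="jannesg/takalane_tsn_roberta",
)

# RoBERTa-style models use <mask> as the mask token; the sentence is a placeholder.
print(fill_mask("Ke rata <mask>."))
```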
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 10000
## Training procedure
No preprocessing. Standard Hugging Face hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
jatinshah/bert-finetuned-squad | bafda7bcc325d27c9d520a921c49c9167a9aba44 | 2022-02-15T02:37:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | jatinshah | null | jatinshah/bert-finetuned-squad | 0 | null | transformers | 35,407 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
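In the absence of documented usage, one plausible way to query the model is the `question-answering` pipeline (the question and context below are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jatinshah/bert-finetuned-squad")

# Any SQuAD-style question/context pair works here.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```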
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jatinshah/marian-finetuned-kde4-en-to-fr | 3c8cdff0b4d7f51a7ef582a02398c08cd3ed2f49 | 2022-02-14T05:47:21.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | jatinshah | null | jatinshah/marian-finetuned-kde4-en-to-fr | 0 | null | transformers | 35,408 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set (the Score can be reconstructed from these components, as sketched after the list):
- Loss: 0.8815
- Score: 52.2204
- Counts: [166010, 120787, 91973, 70929]
- Totals: [228361, 207343, 189354, 173335]
- Precisions: [72.69630103213771, 58.254679444205976, 48.57198686058916, 40.92018345977443]
- Bp: 0.9695
- Sys Len: 228361
- Ref Len: 235434
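These fields appear to be sacreBLEU components; assuming the standard BLEU formula (brevity penalty times the geometric mean of the 1–4-gram precisions), the reported Score can be reproduced from them:
```python
import math

precisions = [72.69630103213771, 58.254679444205976, 48.57198686058916, 40.92018345977443]
sys_len, ref_len = 228361, 235434

# Brevity penalty applies when the system output is shorter than the reference.
bp = math.exp(1 - ref_len / sys_len) if sys_len < ref_len else 1.0
bleu = bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))

print(round(bp, 4), round(bleu, 2))  # ≈ 0.9695 and ≈ 52.22, matching Bp and Score above
```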
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jawaharreddy247/wav2vec2-large-xlsr-hindhi-demo-colab | ecef9a734917c5553f5757db04a04c7c0322dc30 | 2021-10-28T12:30:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jawaharreddy247 | null | jawaharreddy247/wav2vec2-large-xlsr-hindhi-demo-colab | 0 | null | transformers | 35,409 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindhi-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindhi-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
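The card ships no usage example; transcription via the `automatic-speech-recognition` pipeline would look roughly like this (the audio path is a placeholder, and XLSR models expect 16 kHz audio):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jawaharreddy247/wav2vec2-large-xlsr-hindhi-demo-colab",
)

# Placeholder file path; the pipeline decodes and resamples common formats via ffmpeg.
print(asr("sample_hindi_clip.wav")["text"])
```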
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jaynlp/t5-large-transferqa | a95e221dfc82b79bb6d8551f1d4e3ce9af600ae0 | 2022-02-17T11:08:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2109.04655",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jaynlp | null | jaynlp/t5-large-transferqa | 0 | 1 | transformers | 35,410 | Reproduced TransferQA paper pre-trained weights.
https://arxiv.org/abs/2109.04655 |
jcmc/wav2vec2-xls-r-300m-jp | 13d54e22929c5d8480b10cf950d6565769133aa5 | 2022-01-26T07:22:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | jcmc | null | jcmc/wav2vec2-xls-r-300m-jp | 0 | null | transformers | 35,411 | Entry not found |
jcsilva/wav2vec2-base-timit-demo-colab | 618d063f7ddf3d3322a0c8c9830ff4a46ea6202d | 2021-12-18T13:45:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jcsilva | null | jcsilva/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 35,412 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7665
- Wer: 0.6956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.052 | 0.8 | 100 | 3.0167 | 1.0 |
| 2.7436 | 1.6 | 200 | 1.9369 | 1.0006 |
| 1.4182 | 2.4 | 300 | 0.7665 | 0.6956 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jeanlks/DialogGPT-small-pato | 48396ddea5261f6927946d7765b9bece8453225d | 2021-09-20T14:23:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jeanlks | null | jeanlks/DialogGPT-small-pato | 0 | null | transformers | 35,413 | ---
tags:
- conversational
---
# Chatbot pato |
jenspt/byt5_ft_error_only | 572e22ad98fd887918023b1ccddb1fdb6b9b6643 | 2021-11-26T14:26:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jenspt | null | jenspt/byt5_ft_error_only | 0 | null | transformers | 35,414 | Entry not found |
jenspt/mln_ft | ba9c7030c9414c2cd733e027ddbbcf3ef0b82918 | 2021-12-05T15:52:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jenspt | null | jenspt/mln_ft | 0 | null | transformers | 35,415 | Entry not found |
jfarray/Model_all-distilroberta-v1_100_Epochs | af9e35efaa95bcd64f9bb9f00b800117d827da9c | 2022-02-13T20:50:24.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_all-distilroberta-v1_100_Epochs | 0 | null | sentence-transformers | 35,416 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
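Since the card mentions clustering and semantic search, a natural follow-up is scoring similarity between embeddings. A short sketch, assuming the repository id `jfarray/Model_all-distilroberta-v1_100_Epochs` stands in for the card's unfilled `{MODEL_NAME}` placeholder:
```python
from sentence_transformers import SentenceTransformer, util

# Assumes the repository id replaces the {MODEL_NAME} placeholder above.
model = SentenceTransformer("jfarray/Model_all-distilroberta-v1_100_Epochs")

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```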
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
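The training script is not included in the card; the parameters above map onto `SentenceTransformer.fit()` roughly as follows. This is a sketch only: the training pairs are invented placeholders, the evaluator is omitted because the evaluation data is not published here, and `sentence-transformers/all-distilroberta-v1` is assumed as the starting checkpoint based on the model name.
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumed starting checkpoint; placeholder training pairs with similarity labels.
model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")
train_examples = [
    InputExample(texts=["a student summary", "the reference summary"], label=0.8),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=15)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=100,
    warmup_steps=110,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```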
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_all-distilroberta-v1_30_Epochs | 1e762b3309492c484394ea81ac17bd33ebf26d82 | 2022-02-13T20:00:26.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_all-distilroberta-v1_30_Epochs | 0 | null | sentence-transformers | 35,417 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_all-distilroberta-v1_50_Epochs | 38848063ce504fb8246aeb3e20e800c0812db432 | 2022-02-13T20:18:37.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_all-distilroberta-v1_50_Epochs | 0 | null | sentence-transformers | 35,418 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_bert-base-multilingual-uncased_10_Epochs | 888e780a84f0919539431c7543048e4f385349ea | 2022-02-13T23:21:43.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_bert-base-multilingual-uncased_10_Epochs | 0 | null | sentence-transformers | 35,419 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_bert-base-multilingual-uncased_50_Epochs | 4eaf53db5af1ee5d0732b39adc763ab21e060a3f | 2022-02-14T19:44:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_bert-base-multilingual-uncased_50_Epochs | 0 | null | sentence-transformers | 35,420 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_30_Epochs | 65c592ac61fe265cc0310f7cbc1c2cb04341af17 | 2022-02-14T21:20:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_30_Epochs | 0 | null | sentence-transformers | 35,421 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_distiluse-base-multilingual-cased-v1_10_Epochs | 9efa42387c3fc9c17b59c6fc8b99bb4d23cd4aa5 | 2022-02-12T13:53:59.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_distiluse-base-multilingual-cased-v1_10_Epochs | 0 | null | sentence-transformers | 35,422 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_distiluse-base-multilingual-cased-v1_5_Epochs | fef5291e81ae3fb22e77769323f35e52d610ca1d | 2022-02-12T13:43:01.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | jfarray | null | jfarray/Model_distiluse-base-multilingual-cased-v1_5_Epochs | 0 | null | sentence-transformers | 35,423 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_10_Epochs | c36d6a25fff0e82b5ccf7856a3bc408a6cb15e98 | 2022-02-12T20:47:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jfarray | null | jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_10_Epochs | 0 | null | sentence-transformers | 35,424 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_30_Epochs | 3a5d92f8db442fb79f21ac39e458a17b47efe136 | 2022-02-12T21:00:41.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jfarray | null | jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_30_Epochs | 0 | null | sentence-transformers | 35,425 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_10_Epochs | c7cb5f1856de0b01c8deb323e68f2019789fe9ee | 2022-02-12T22:32:17.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jfarray | null | jfarray/Model_paraphrase-multilingual-mpnet-base-v2_10_Epochs | 0 | null | sentence-transformers | 35,426 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_50_Epochs | 249e245da9bd2ee78711a33c7fc5d2561e68a8f7 | 2022-02-12T23:39:31.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jfarray | null | jfarray/Model_paraphrase-multilingual-mpnet-base-v2_50_Epochs | 0 | null | sentence-transformers | 35,427 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jfhr1999/CharacterTest | 34ec9fe5989b4c8280cf5ed88edbf1e1045055cd | 2021-07-02T17:47:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jfhr1999 | null | jfhr1999/CharacterTest | 0 | null | transformers | 35,428 | ---
tags:
- conversational
---
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
tokenizer = AutoTokenizer.from_pretrained("jfhr1999/CharacterTest")
model = AutoModelWithLMHead.from_pretrained("jfhr1999/CharacterTest")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 200 tokens
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
jgammack/MTL-bert-base-uncased-ww | 3454496bbffc403986a43775673729409aee70a4 | 2022-02-08T17:50:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jgammack | null | jgammack/MTL-bert-base-uncased-ww | 0 | null | transformers | 35,429 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-bert-base-uncased-ww
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2964 | 1.0 | 99 | 2.9560 |
| 3.0419 | 2.0 | 198 | 2.8336 |
| 2.8979 | 3.0 | 297 | 2.8009 |
| 2.8815 | 4.0 | 396 | 2.7394 |
| 2.8373 | 5.0 | 495 | 2.6813 |
| 2.741 | 6.0 | 594 | 2.6270 |
| 2.6877 | 7.0 | 693 | 2.5216 |
| 2.6823 | 8.0 | 792 | 2.5485 |
| 2.6326 | 9.0 | 891 | 2.5690 |
| 2.5976 | 10.0 | 990 | 2.6336 |
| 2.6009 | 11.0 | 1089 | 2.5919 |
| 2.5615 | 12.0 | 1188 | 2.4264 |
| 2.5826 | 13.0 | 1287 | 2.5562 |
| 2.5693 | 14.0 | 1386 | 2.5529 |
| 2.5494 | 15.0 | 1485 | 2.5300 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/MTL-roberta-base | 64bd9accf802b7c8ab8e4180c5ffbccc9496402b | 2022-02-07T22:45:49.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jgammack | null | jgammack/MTL-roberta-base | 0 | null | transformers | 35,430 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: MTL-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8338 | 1.0 | 98 | 1.6750 |
| 1.7732 | 2.0 | 196 | 1.6229 |
| 1.7208 | 3.0 | 294 | 1.6131 |
| 1.6917 | 4.0 | 392 | 1.5936 |
| 1.6579 | 5.0 | 490 | 1.6183 |
| 1.6246 | 6.0 | 588 | 1.6015 |
| 1.6215 | 7.0 | 686 | 1.5248 |
| 1.5743 | 8.0 | 784 | 1.5454 |
| 1.5621 | 9.0 | 882 | 1.5925 |
| 1.5652 | 10.0 | 980 | 1.5213 |
| 1.5615 | 11.0 | 1078 | 1.4845 |
| 1.5349 | 12.0 | 1176 | 1.5443 |
| 1.5165 | 13.0 | 1274 | 1.5304 |
| 1.5164 | 14.0 | 1372 | 1.4773 |
| 1.5293 | 15.0 | 1470 | 1.5537 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/SAE-roberta-base-squad | a67cec7fc04405033f80b137414840e0bdcb1d58 | 2022-02-08T11:17:55.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | jgammack | null | jgammack/SAE-roberta-base-squad | 0 | null | transformers | 35,431 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: SAE-roberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base-squad
This model is a fine-tuned version of [jgammack/SAE-roberta-base](https://huggingface.co/jgammack/SAE-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/distilbert-base-uncased-squad | 0392553a2ee05867b60b636bb0340b64f87f48a2 | 2022-02-08T01:36:38.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | jgammack | null | jgammack/distilbert-base-uncased-squad | 0 | null | transformers | 35,432 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/roberta-base-squad | 03cdc6f3863369f684a9e39a875959845113bd65 | 2022-02-08T07:39:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | jgammack | null | jgammack/roberta-base-squad | 0 | null | transformers | 35,433 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jhemmingsson/lab2 | b48a38b1917fb1ba41c606c4f1cc13079d5e80ce | 2021-12-13T23:07:17.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jhemmingsson | null | jhemmingsson/lab2 | 0 | null | sentence-transformers | 35,434 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jhemmingsson/lab2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jhemmingsson/lab2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhemmingsson/lab2')
model = AutoModel.from_pretrained('jhemmingsson/lab2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jhemmingsson/lab2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 357 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom | 90f6ce68df655dd2ebff32d542c733506c2dad13 | 2022-01-27T14:58:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"model-index"
] | automatic-speech-recognition | false | jhonparra18 | null | jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom | 0 | null | transformers | 35,435 | ---
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-custom
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2245
- eval_wer: 0.2082
- eval_runtime: 801.6784
- eval_samples_per_second: 18.822
- eval_steps_per_second: 2.354
- epoch: 0.76
- step: 8400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
jihji/koelectra-base-klue-mrc | fdbdefb80a3d5f292cadf35ce30edd32f1eb2782 | 2021-09-02T09:15:16.000Z | [
"pytorch"
] | null | false | jihji | null | jihji/koelectra-base-klue-mrc | 0 | null | null | 35,436 | Entry not found |
jihopark/colloquial | 96268c07120d0c8223aad4ded5d082a419cbd178 | 2021-05-23T05:54:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jihopark | null | jihopark/colloquial | 0 | null | transformers | 35,437 | Entry not found |
jimregan/wav2vec2-large-xlsr-slovakian | 96b1d77fab81e307c9888e89358c676706090527 | 2021-07-06T06:54:45.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | jimregan | null | jimregan/wav2vec2-large-xlsr-slovakian | 0 | null | transformers | 35,438 | Entry not found |
jinbbong/kbert_base_esg_e10 | 938fcb5b00f025d1befbd3e7f91e75125e54e1ba | 2021-11-03T12:52:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kbert_base_esg_e10 | 0 | null | transformers | 35,439 | Entry not found |
jinbbong/kbert_base_esg_e3 | c60c5f4d06d791fc2231e2536ec2aaa3d2b0806d | 2021-11-02T23:29:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinbbong | null | jinbbong/kbert_base_esg_e3 | 0 | null | transformers | 35,440 | Entry not found |
jindongwang/opus-mt-en-ro-finetuned-en-to-ro | 949f60b2185be1aa6f8f611488c2791a3e75c83e | 2022-02-10T08:32:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jindongwang | null | jindongwang/opus-mt-en-ro-finetuned-en-to-ro | 0 | null | transformers | 35,441 | Entry not found |
jinlmsft/t5-large-domain-detect | 4067ec19b8a5d02b46d551e0209fb61527ec1b5f | 2022-01-30T07:47:26.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | jinlmsft | null | jinlmsft/t5-large-domain-detect | 0 | null | transformers | 35,442 | Entry not found |
jinmang2/roberta-large-re-tapt-20300 | 08f2ea297f42263dc437aab04061d85ce637aa4a | 2021-10-06T07:10:00.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jinmang2 | null | jinmang2/roberta-large-re-tapt-20300 | 0 | null | transformers | 35,443 | Entry not found |
jiobiala24/wav2vec2-base-checkpoint-1 | 3ae82ebd1277c0682b7eb50106a2c2654d36a929 | 2022-01-06T09:39:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-1 | 0 | null | transformers | 35,444 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-TPU-cv-fine-tune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-TPU-cv-fine-tune
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6987
- Wer: 0.6019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
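The hyperparameters above roughly correspond to the `TrainingArguments` sketch below. This is a reconstruction for orientation only, not the original training script; `output_dir` and any options not listed are placeholders or library defaults.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the settings listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-checkpoint-1",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,            # effective train batch size 32
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
)
```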
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1017 | 8.88 | 400 | 1.4635 | 0.7084 |
| 0.436 | 17.77 | 800 | 1.4765 | 0.6231 |
| 0.1339 | 26.66 | 1200 | 1.6987 | 0.6019 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-12 | 706104ef91be96f28befc6095bad66b599c1d12f | 2022-02-12T23:02:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-12 | 0 | null | transformers | 35,445 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-12
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-11.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-11.1) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0795
- Wer: 0.3452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2793 | 1.64 | 1000 | 0.5692 | 0.3518 |
| 0.2206 | 3.28 | 2000 | 0.6127 | 0.3460 |
| 0.1733 | 4.93 | 3000 | 0.6622 | 0.3580 |
| 0.1391 | 6.57 | 4000 | 0.6768 | 0.3519 |
| 0.1193 | 8.21 | 5000 | 0.7559 | 0.3540 |
| 0.1053 | 9.85 | 6000 | 0.7873 | 0.3562 |
| 0.093 | 11.49 | 7000 | 0.8170 | 0.3612 |
| 0.0833 | 13.14 | 8000 | 0.8682 | 0.3579 |
| 0.0753 | 14.78 | 9000 | 0.8317 | 0.3573 |
| 0.0698 | 16.42 | 10000 | 0.9213 | 0.3525 |
| 0.0623 | 18.06 | 11000 | 0.9746 | 0.3531 |
| 0.0594 | 19.7 | 12000 | 1.0027 | 0.3502 |
| 0.0538 | 21.35 | 13000 | 1.0045 | 0.3545 |
| 0.0504 | 22.99 | 14000 | 0.9821 | 0.3523 |
| 0.0461 | 24.63 | 15000 | 1.0818 | 0.3462 |
| 0.0439 | 26.27 | 16000 | 1.0995 | 0.3495 |
| 0.0421 | 27.91 | 17000 | 1.0533 | 0.3430 |
| 0.0415 | 29.56 | 18000 | 1.0795 | 0.3452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jiobiala24/wav2vec2-base-checkpoint-9 | e7ef05e1afcee7a76d0488b389d31b1fc58a7846 | 2022-01-25T19:52:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-9 | 0 | null | transformers | 35,446 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-9
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-8](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-8) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9203
- Wer: 0.3258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2783 | 1.58 | 1000 | 0.5610 | 0.3359 |
| 0.2251 | 3.16 | 2000 | 0.5941 | 0.3374 |
| 0.173 | 4.74 | 3000 | 0.6026 | 0.3472 |
| 0.1475 | 6.32 | 4000 | 0.6750 | 0.3482 |
| 0.1246 | 7.9 | 5000 | 0.6673 | 0.3414 |
| 0.1081 | 9.48 | 6000 | 0.7072 | 0.3409 |
| 0.1006 | 11.06 | 7000 | 0.7413 | 0.3392 |
| 0.0879 | 12.64 | 8000 | 0.7831 | 0.3394 |
| 0.0821 | 14.22 | 9000 | 0.7371 | 0.3333 |
| 0.0751 | 15.8 | 10000 | 0.8321 | 0.3445 |
| 0.0671 | 17.38 | 11000 | 0.8362 | 0.3357 |
| 0.0646 | 18.96 | 12000 | 0.8709 | 0.3367 |
| 0.0595 | 20.54 | 13000 | 0.8352 | 0.3321 |
| 0.0564 | 22.12 | 14000 | 0.8854 | 0.3323 |
| 0.052 | 23.7 | 15000 | 0.9031 | 0.3315 |
| 0.0485 | 25.28 | 16000 | 0.9171 | 0.3278 |
| 0.046 | 26.86 | 17000 | 0.9390 | 0.3254 |
| 0.0438 | 28.44 | 18000 | 0.9203 | 0.3258 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
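For reference, transcription with this checkpoint might look like the sketch below (not from the original card; the audio file name is a placeholder and we assume the repository ships the matching processor files):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("jiobiala24/wav2vec2-base-checkpoint-9")
model = Wav2Vec2ForCTC.from_pretrained("jiobiala24/wav2vec2-base-checkpoint-9")

# Load a clip and resample to the 16 kHz rate wav2vec2-base expects.
waveform, sample_rate = torchaudio.load("example.wav")  # placeholder file
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).squeeze()

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```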
|
jivatneet/bert-mlm-batchsize8 | e0ce8351e1cded2e214386de931160924a26edea | 2021-07-09T06:35:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jivatneet | null | jivatneet/bert-mlm-batchsize8 | 0 | null | transformers | 35,447 | BERT MLM
|
jje1113/wav2vec2-base-timit-demo | 6e66337855fee3f1032c5000ecc1c5bc3242ebba | 2022-02-07T15:21:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | jje1113 | null | jje1113/wav2vec2-base-timit-demo | 0 | null | transformers | 35,448 | Entry not found |
jky594176/BART2_GRU | 75a1db3b712dc6de35fdd9c5425105155f08c970 | 2021-05-31T19:41:21.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jky594176 | null | jky594176/BART2_GRU | 0 | null | transformers | 35,449 | Entry not found |
jky594176/recipe_BART1_GRU | 0a29db1872dd5d43893c055dbb554c8ad44b01e5 | 2021-05-31T05:50:11.000Z | [
"pytorch",
"bart",
"text-generation",
"transformers"
] | text-generation | false | jky594176 | null | jky594176/recipe_BART1_GRU | 0 | null | transformers | 35,450 | Entry not found |
jky594176/recipe_BART2GRU_0601_BCElogitsloss | c0b6c982a51c5d21dbc6e8d6dc8350b215423dca | 2021-06-01T05:28:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jky594176 | null | jky594176/recipe_BART2GRU_0601_BCElogitsloss | 0 | null | transformers | 35,451 | Entry not found |
jky594176/recipe_BART2GRU_0601_BCEloss | cdc7029554f810fa345e91c78341cf926a7d352c | 2021-06-01T05:29:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jky594176 | null | jky594176/recipe_BART2GRU_0601_BCEloss | 0 | null | transformers | 35,452 | Entry not found |
jky594176/recipe_GPT2 | db4168a2f3d98739d5b1e8681846cbeabda52511 | 2021-05-30T16:46:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jky594176 | null | jky594176/recipe_GPT2 | 0 | null | transformers | 35,453 | Entry not found |
jky594176/recipe_bart2_v3 | dec01e292cfa1dadb9998a14afb7ca333c07761e | 2021-06-01T06:37:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jky594176 | null | jky594176/recipe_bart2_v3 | 0 | null | transformers | 35,454 | Entry not found |
jmamou/gpt2-medium-IMDB | c2a67e6adcf965f1e71fe347567da27571549b0f | 2021-08-23T13:24:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jmamou | null | jmamou/gpt2-medium-IMDB | 0 | null | transformers | 35,455 | Entry not found |
jmamou/gpt2-medium-SST-2 | caa5d21332ee4293ad259d66415649bbe684370b | 2021-08-23T13:29:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jmamou | null | jmamou/gpt2-medium-SST-2 | 0 | null | transformers | 35,456 | Entry not found |
joehdownardkainos/autonlp-intent-modelling-21895237 | db88fd1d5ef0ece9d935d4c9db70057249c013a0 | 2021-10-21T11:29:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"unk",
"dataset:joehdownardkainos/autonlp-data-intent-modelling",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | joehdownardkainos | null | joehdownardkainos/autonlp-intent-modelling-21895237 | 0 | null | transformers | 35,457 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- joehdownardkainos/autonlp-data-intent-modelling
co2_eq_emissions: 1.5688902203257171
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21895237
- CO2 Emissions (in grams): 1.5688902203257171
## Validation Metrics
- Loss: 1.6614878177642822
- Rouge1: 32.4158
- Rouge2: 24.6194
- RougeL: 29.9278
- RougeLsum: 29.4988
- Gen Len: 58.7778
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/joehdownardkainos/autonlp-intent-modelling-21895237
``` |
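A Python alternative to the cURL call above, sketched on the assumption that the checkpoint loads as a standard BART seq2seq model (as the tags suggest):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained(
    "joehdownardkainos/autonlp-intent-modelling-21895237", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "joehdownardkainos/autonlp-intent-modelling-21895237", use_auth_token=True
)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```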
joheras/xls-r-ab-spanish | 8e8de22d41c3e99f57c57f681b5976057235e423 | 2022-01-21T15:42:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | joheras | null | joheras/xls-r-ab-spanish | 0 | null | transformers | 35,458 | ---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8790
- Wer: 1.3448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
johnpaulbin/gpt2-skript-1m-v5 | 9ca22bfc8685c85870c8fbeaac339f0386b71a51 | 2021-07-24T20:56:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | johnpaulbin | null | johnpaulbin/gpt2-skript-1m-v5 | 0 | null | transformers | 35,459 | ## GPT-2 for Skript
## Complete your Skript automatically via a finetuned GPT-2 model
Training loss of `0.57` after about 2 epochs (in total).
The dataset contains 1.2 million lines of Skript.
Inference Colab: https://colab.research.google.com/drive/1ujtLt7MOk7Nsag3q-BYK62Kpoe4Lr4PE |
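If you prefer local inference over the Colab, a minimal sketch with the text-generation pipeline should work (the Skript prompt below is made up for illustration):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="johnpaulbin/gpt2-skript-1m-v5")

prompt = 'on join:\n    send "Welcome'  # illustrative Skript prompt
print(generator(prompt, max_length=64)[0]["generated_text"])
```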
johnpaulbin/gpt2-skript-base | 955b8bf96cdd0a4f1cc517d7e06a427ed6b93849 | 2021-07-15T07:14:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | johnpaulbin | null | johnpaulbin/gpt2-skript-base | 0 | null | transformers | 35,460 | GPT-2 for the Minecraft plugin Skript (50,000 lines, 3 GB; GPT-2 Large fine-tune)
Inference Colab: https://colab.research.google.com/drive/1z8dwtNP8Kj3evEOmKmGBHK_vmP30lgiY |
jonpodtu/02sparseOverlapConvTasNet_SUM_2spk_8k | cf506809abd86b072c316723e05bd6d9ade5eda9 | 2021-06-23T14:56:56.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/02sparseOverlapConvTasNet_SUM_2spk_8k | 0 | null | null | 35,461 | The following model was trained on the SUM partition of 20% overlapping mixtures. |
jonpodtu/04sparseOverlapConvTasNet_SUM_2spk_8k | 9c74c590af419b8e930cc12e3f3e0c769cd39df4 | 2021-06-23T14:58:31.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/04sparseOverlapConvTasNet_SUM_2spk_8k | 0 | null | null | 35,462 | Entry not found |
jonpodtu/06sparseOverlapConvTasNet_SUM_2spk_8k | 92d1c02c75ce859f2e956b4f6378a94b9bba002d | 2021-06-23T15:00:27.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/06sparseOverlapConvTasNet_SUM_2spk_8k | 0 | null | null | 35,463 | Entry not found |
jonpodtu/08sparseOverlapConvTasNet_SUM_2spk_8k | 1007929d2c660142afdcfebc4a48631897b32d2b | 2021-06-23T15:01:34.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/08sparseOverlapConvTasNet_SUM_2spk_8k | 0 | null | null | 35,464 | Entry not found |
jonpodtu/0sparseOverlapConvTasNet_SUM_2spk_8k | de33d8c227e2eda4474974867cc25f93261a9f48 | 2021-06-23T14:54:00.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/0sparseOverlapConvTasNet_SUM_2spk_8k | 0 | null | null | 35,465 | Entry not found |
jonpodtu/1sparseOverlapConvTasNet_SUM_2spk_8k | fa4b5c408ad3f4618c1efc6c9eed1a0101394a64 | 2021-06-23T14:55:19.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/1sparseOverlapConvTasNet_SUM_2spk_8k | 0 | null | null | 35,466 | Entry not found |
jonpodtu/1sparseOverlapDPRNN_SUM_2spk_8k | 21426b568c6508a2c75b2157e99ee4a9359c3bbb | 2021-06-23T15:46:44.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/1sparseOverlapDPRNN_SUM_2spk_8k | 0 | null | null | 35,467 | Entry not found |
jonpodtu/mixsparseOverlapConvTasNet_SUM_2spk_8k | 5e8ace7b8aa15a92425205f0f8740c1abdcf7e84 | 2021-06-23T15:47:50.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/mixsparseOverlapConvTasNet_SUM_2spk_8k | 0 | null | null | 35,468 | Entry not found |
jonpodtu/mixsparseOverlapDPRNN_SUM_2spk_8k | 299cacb62f842acb6a8aa39ed11b8fb854d20f0f | 2021-06-23T15:47:14.000Z | [
"pytorch"
] | null | false | jonpodtu | null | jonpodtu/mixsparseOverlapDPRNN_SUM_2spk_8k | 0 | null | null | 35,469 | Entry not found |
jontooy/AraBERT32-Flickr8k | 45de03041774f9ffafdd3ca3d7616a3c476a8bc2 | 2022-06-06T13:07:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | jontooy | null | jontooy/AraBERT32-Flickr8k | 0 | null | transformers | 35,470 | ---
license: afl-3.0
---
|
josedlhm/trump_tweet | 89ff4f277638aafbfd3eb65cd088afd5e72fac47 | 2021-11-24T14:19:14.000Z | [
"pytorch",
"openai-gpt",
"text-generation",
"transformers"
] | text-generation | false | josedlhm | null | josedlhm/trump_tweet | 0 | null | transformers | 35,471 | Entry not found |
josepjulia/RepoHumanChatBot | 236651af2887c4c32fc7e48d3c28e7e047d7d7d0 | 2021-12-11T19:59:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | josepjulia | null | josepjulia/RepoHumanChatBot | 0 | null | transformers | 35,472 | ---
tags:
- conversational
---
# HumanChat Model |
josh8/DialoGPT-medium-josh | 227a1e080c0793b0b985336cc12bfa2b30acca1f | 2022-01-28T05:41:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | josh8 | null | josh8/DialoGPT-medium-josh | 0 | null | transformers | 35,473 | ---
tags:
- conversational
---
# Josh DialoGPT medium Bot |
josh8/DialoGPT-small-josh | 3dac7bd66cb38f743ca2ba3c1314123a3dc25a17 | 2022-01-28T04:51:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | josh8 | null | josh8/DialoGPT-small-josh | 0 | null | transformers | 35,474 | ---
tags:
- conversational
---
# Josh DialoGPT Model |
jppaolim/homerGPT2 | ae6018ad39a077624f056e420eca7e44541a0832 | 2021-05-23T06:05:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jppaolim | null | jppaolim/homerGPT2 | 0 | null | transformers | 35,475 | First model for storytelling
|
jpsxlr8/DialoGPT-small-harrypotter | 038f4eb94bd48176282afd7642203e17ae304ace | 2021-09-27T17:35:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | jpsxlr8 | null | jpsxlr8/DialoGPT-small-harrypotter | 0 | null | transformers | 35,476 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
jsm33/d4-primary | 949259b2840e31326ca59720a7c4c34e05fc9aa8 | 2021-05-30T18:19:39.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | jsm33 | null | jsm33/d4-primary | 0 | null | transformers | 35,477 | Entry not found |
juanhebert/wav2vec2-base-timit-demo-colab | 53b806beca264d2856500efedbda5259c85d5e43 | 2022-02-24T02:32:18.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | juanhebert | null | juanhebert/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 35,478 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.2385
- eval_wer: 1.0
- eval_runtime: 145.9952
- eval_samples_per_second: 11.507
- eval_steps_per_second: 1.438
- epoch: 0.25
- step: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
juliagsy/tapas-fine-tuned | 233c7e2c04082cd528e3240219bdaa2694ca0630 | 2022-01-31T07:38:13.000Z | [
"pytorch",
"tapas",
"table-question-answering",
"transformers"
] | table-question-answering | false | juliagsy | null | juliagsy/tapas-fine-tuned | 0 | null | transformers | 35,479 | Entry not found |
julien-c/ColabTest | 1f91c072d4736703f7ada9f6932cae4eb63faebf | 2022-01-27T14:28:57.000Z | [
"pytorch",
"tf",
"license:mit"
] | null | false | julien-c | null | julien-c/ColabTest | 0 | null | null | 35,480 | ---
license: mit
---
|
julien-c/dummy-model-from-colab | c72f32ba20a2914898964feb7e693456afd42249 | 2021-05-20T17:30:49.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | julien-c | null | julien-c/dummy-model-from-colab | 0 | null | transformers | 35,481 | Entry not found |
julien-c/t5-3b-fork | 0cd1f84a5a36cda36011bfe8bb1ec9df9ab484ba | 2020-11-20T15:39:22.000Z | [
"pytorch"
] | null | false | julien-c | null | julien-c/t5-3b-fork | 0 | null | null | 35,482 | Entry not found |
julien-c/voice-activity-detection | acc249138b33c0eaff12814b270c78e9fdddd923 | 2020-12-21T22:38:05.000Z | [
"pytorch",
"dataset:dihard",
"arxiv:1910.10655",
"pyannote",
"audio",
"voice-activity-detection",
"license:mit"
] | voice-activity-detection | false | julien-c | null | julien-c/voice-activity-detection | 0 | 1 | null | 35,483 | ---
tags:
- pyannote
- audio
- voice-activity-detection
datasets:
- dihard
license: mit
inference: false
---
## Example pyannote-audio Voice Activity Detection model
### `pyannote.audio.models.segmentation.PyanNet`
♻️ Imported from https://github.com/pyannote/pyannote-audio-hub
This model was trained by @hbredin.
### Demo: How to use in pyannote-audio
```python
from pyannote.audio.core.inference import Inference
model = Inference('julien-c/voice-activity-detection', device='cuda')
model({
"audio": "TheBigBangTheory.wav"
})
```
### Citing pyannote-audio
```BibTex
@inproceedings{Bredin2020,
Title = {{pyannote.audio: neural building blocks for speaker diarization}},
Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
Address = {Barcelona, Spain},
Month = {May},
Year = {2020},
}
```
or
```bibtex
@inproceedings{Lavechin2020,
author = {Marvin Lavechin and Marie-Philippe Gill and Ruben Bousbib and Herv\'{e} Bredin and Leibny Paola Garcia-Perera},
title = {{End-to-end Domain-Adversarial Voice Activity Detection}},
year = {2020},
url = {https://arxiv.org/abs/1910.10655},
}
```
|
juliusco/biobert-base-cased-v1.1-squad-finetuned-biobert | dff6e0d79317c10d5b8324a9531a3ea3f79e8a65 | 2021-12-14T08:04:26.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | juliusco | null | juliusco/biobert-base-cased-v1.1-squad-finetuned-biobert | 0 | null | transformers | 35,484 | Entry not found |
juliusco/distilbert-base-uncased-finetuned-covdistilbert | f5a35111c147610df158622b81fbee46ec494121 | 2021-12-14T09:08:34.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | juliusco | null | juliusco/distilbert-base-uncased-finetuned-covdistilbert | 0 | null | transformers | 35,485 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: distilbert-base-uncased-finetuned-covdistilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-covdistilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 457 | 0.5125 |
| 0.5146 | 2.0 | 914 | 0.4843 |
| 0.2158 | 3.0 | 1371 | 0.4492 |
| 0.1639 | 4.0 | 1828 | 0.4760 |
| 0.1371 | 5.0 | 2285 | 0.4844 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
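As a usage illustration (not part of the original card; question and context are made up), the fine-tuned checkpoint can be queried with the question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="juliusco/distilbert-base-uncased-finetuned-covdistilbert",
)
result = qa(
    question="Which virus causes COVID-19?",
    context="COVID-19 is the disease caused by the coronavirus SARS-CoV-2.",
)
print(result["answer"], round(result["score"], 3))
```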
|
junzai/bert_finetuning_test_hug | 35552c31a5e3c420e33be6df532582cd1a1f2527 | 2021-08-31T09:16:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junzai | null | junzai/bert_finetuning_test_hug | 0 | null | transformers | 35,486 | Entry not found |
junzai/bert_funting_test_ai10 | eeeb76183fc15b9d2b7b664a96190805071381e5 | 2021-09-02T09:33:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | junzai | null | junzai/bert_funting_test_ai10 | 0 | null | transformers | 35,487 | Entry not found |
justin871030/bert-base-uncased-goemotions | a1e60a37f7ed2c3a786b84aa8423929135df8af5 | 2022-01-08T09:50:28.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | justin871030 | null | justin871030/bert-base-uncased-goemotions | 0 | null | transformers | 35,488 | Entry not found |
kSaluja/autonlp-tele_new_5k-557515810 | 7dc76de4ebe21ec56530656e4b0192144171b2d7 | 2022-02-08T20:58:51.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:kSaluja/autonlp-data-tele_new_5k",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | kSaluja | null | kSaluja/autonlp-tele_new_5k-557515810 | 0 | 1 | transformers | 35,489 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kSaluja/autonlp-data-tele_new_5k
co2_eq_emissions: 2.96638567287195
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 557515810
- CO2 Emissions (in grams): 2.96638567287195
## Validation Metrics
- Loss: 0.12897901237010956
- Accuracy: 0.9713212700580403
- Precision: 0.9475614228089475
- Recall: 0.96274217585693
- F1: 0.9550914803178709
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kSaluja/autonlp-tele_new_5k-557515810
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("kSaluja/autonlp-tele_new_5k-557515810", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kSaluja/autonlp-tele_new_5k-557515810", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
kaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaot1k/DialoGPT-small-Wanda | 115f39b4f35f8238e7adcac626353dc59131ecc2 | 2021-10-18T16:45:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | kaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaot1k | null | kaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaot1k/DialoGPT-small-Wanda | 0 | null | transformers | 35,490 | ---
tags:
- conversational
---
#wanda bot go reeeeeeeeeeeeeeeeeeeeee |
kamilali/distilbert-base-uncased-finetuned-squad | f4fee81592a301db97cbfc6cf2138b8e9c33ab6f | 2022-03-06T03:16:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | kamilali | null | kamilali/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 35,491 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 0.5793 |
| No log | 2.0 | 2 | 0.1730 |
| No log | 3.0 | 3 | 0.1042 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
karthik19967829/XLM-R-af-model | 0d2c4960c66b9dc97c66970a4cc5b6939cc24c48 | 2022-02-03T07:52:45.000Z | [
"pytorch"
] | null | false | karthik19967829 | null | karthik19967829/XLM-R-af-model | 0 | null | null | 35,492 | Entry not found |
karthik19967829/XLM-R-es-model | ec3591a393cf73b395b508f16ae5f79f146e5a95 | 2022-02-03T08:29:24.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | karthik19967829 | null | karthik19967829/XLM-R-es-model | 0 | null | transformers | 35,493 | Entry not found |
karthik19967829/XLM-R-hy-model | 34142176b42f2b2ae7c357d41d255301798f56ce | 2022-02-03T08:35:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | karthik19967829 | null | karthik19967829/XLM-R-hy-model | 0 | null | transformers | 35,494 | Entry not found |
karthik19967829/XLM-R-lt-model | 4d92fc3c705d83e7fc99036fc3731b1754ec5f11 | 2022-02-03T08:43:28.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | karthik19967829 | null | karthik19967829/XLM-R-lt-model | 0 | null | transformers | 35,495 | Entry not found |
karthik19967829/XLM-R-ta-model | c518417ec4f48d83a3e766671f2dc49bd97205b7 | 2022-02-03T08:50:15.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | karthik19967829 | null | karthik19967829/XLM-R-ta-model | 0 | null | transformers | 35,496 | Entry not found |
kazandaev/opus-mt-en-ru-finetuned | f08b1fed350670e0aa791766b8a3aba82c7bd083 | 2022-02-27T22:31:49.000Z | [
"pytorch",
"tensorboard",
"rust",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kazandaev | null | kazandaev/opus-mt-en-ru-finetuned | 0 | null | transformers | 35,497 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned
This model is a fine-tuned version of [kazandaev/opus-mt-en-ru-finetuned](https://huggingface.co/kazandaev/opus-mt-en-ru-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7763
- Bleu: 41.0065
- Gen Len: 29.7548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 49
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.6903 | 1.0 | 35147 | 0.7779 | 40.9223 | 29.7846 |
| 0.6999 | 2.0 | 70294 | 0.7776 | 40.8267 | 29.8421 |
| 0.7257 | 3.0 | 105441 | 0.7769 | 40.8549 | 29.8765 |
| 0.7238 | 4.0 | 140588 | 0.7763 | 41.0225 | 29.7129 |
| 0.7313 | 5.0 | 175735 | 0.7763 | 41.0065 | 29.7548 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
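For a quick sanity check of the fine-tuned translator (a sketch; the example sentence is ours, not from the card):
```python
from transformers import pipeline

translator = pipeline("translation", model="kazandaev/opus-mt-en-ru-finetuned")
print(translator("The weather is nice today.")[0]["translation_text"])
```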
|
kelvinih/pubmedbert-abs-base-lm-header | 54fc2bfa6891797be65f3f144a370212f2649f08 | 2021-09-21T07:06:53.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | kelvinih | null | kelvinih/pubmedbert-abs-base-lm-header | 0 | null | transformers | 35,498 | Entry not found |
kevinzyz/chinese-roberta-wwm-ext-finetuned-MC-hyper | 2714d4c804ffe739e31a090c75bb20be20421891 | 2021-12-10T07:32:54.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | kevinzyz | null | kevinzyz/chinese-roberta-wwm-ext-finetuned-MC-hyper | 0 | null | transformers | 35,499 | Entry not found |