modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
ABBHISHEK/DialoGPT-small-harrypotter | 55264c63ce90e4221506aff8f18075fa821416eb | 2021-09-19T10:23:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ABBHISHEK | null | ABBHISHEK/DialoGPT-small-harrypotter | 1 | null | transformers | 27,700 | ---
tags:
- conversational
---
# Harry Potter DialoGPT model
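No usage snippet is given on the card; a minimal single-turn sketch, assuming the standard DialoGPT generation recipe from the upstream microsoft/DialoGPT docs (the prompt string is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ABBHISHEK/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("ABBHISHEK/DialoGPT-small-harrypotter")

# Encode a user message plus the EOS token, then let the model write the reply.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```
|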
AG/pretraining | a2457008a315d393619c17efc0cd096516404608 | 2022-03-06T12:27:50.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AG | null | AG/pretraining | 1 | null | transformers | 27,701 | Pre-trained on the clus_ chapter only. |
AIDynamics/DialoGPT-medium-MentorDealerGuy | e9b9b778eb51765576c4cc022be27bd052ff3c30 | 2021-11-17T22:23:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AIDynamics | null | AIDynamics/DialoGPT-medium-MentorDealerGuy | 1 | null | transformers | 27,702 | ---
tags:
- conversational
---
# tests |
AKulk/wav2vec2-base-timit-epochs15 | 28dfe31d272d9414d6255f703b4e6d7f45c6ea74 | 2022-02-15T14:26:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AKulk | null | AKulk/wav2vec2-base-timit-epochs15 | 1 | null | transformers | 27,703 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-epochs15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs10](https://huggingface.co/AKulk/wav2vec2-base-timit-epochs10) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
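These values map one-to-one onto `transformers.TrainingArguments`; a minimal sketch under that assumption, with values copied from the list above (the `output_dir` name is illustrative, and the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Effective batch size: train_batch_size * gradient_accumulation_steps = 16 * 5 = 80.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-epochs15",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=5,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
)
```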
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AKulk/wav2vec2-base-timit-epochs5 | cfb1249637b3a8d58b2ef26168635d220d58fd1a | 2022-02-11T16:48:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AKulk | null | AKulk/wav2vec2-base-timit-epochs5 | 1 | null | transformers | 27,704 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-epochs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs5
This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Aastha/wav2vec2-large-xls-r-1b-hi-v2 | 127bfd203619d0b22064aa35dd2180c006d7bc08 | 2022-02-09T23:22:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-large-xls-r-1b-hi-v2 | 1 | null | transformers | 27,705 | Entry not found |
Aastha/wav2vec2-large-xls-r-1b-hindi | e351458256f10d285b89393dcf6fa2fdca1538ae | 2022-02-11T20:20:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-large-xls-r-1b-hindi | 1 | null | transformers | 27,706 | Entry not found |
Aastha/wav2vec2-large-xls-r-300m-50-hi | 2739d33b5a1a128050c2a85a3bf32d676940f996 | 2022-02-09T22:59:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-large-xls-r-300m-50-hi | 1 | null | transformers | 27,707 | Entry not found |
Aastha/wav2vec2-large-xls-r-300m-hi-v2 | e1092bbdfe84b07d44d0bc14637a4b64bc961c56 | 2022-02-09T10:24:50.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-large-xls-r-300m-hi-v2 | 1 | null | transformers | 27,708 | Entry not found |
Aastha/wav2vec2-large-xls-r-300m-hi | cb6253499d14516f1cd3d9257ecae27d51a7efad | 2022-02-06T13:04:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-large-xls-r-300m-hi | 1 | null | transformers | 27,709 | Entry not found |
Aastha/wav2vec2-large-xlsr-53-hi | bdb6f40c0b858ff6d83bc8eb0d01f9937aa69655 | 2022-02-07T09:59:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-large-xlsr-53-hi | 1 | null | transformers | 27,710 | Entry not found |
AbderrahimRezki/HarryPotterBot | 53718f8988201cce701c94831eb1f019fe54faac | 2021-09-01T16:12:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AbderrahimRezki | null | AbderrahimRezki/HarryPotterBot | 1 | null | transformers | 27,711 | Entry not found |
AbhinavSaiTheGreat/DialoGPT-small-harrypotter | 159497eacaa099a2be9406d68740edc3e7ee70dd | 2021-08-31T05:39:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AbhinavSaiTheGreat | null | AbhinavSaiTheGreat/DialoGPT-small-harrypotter | 1 | null | transformers | 27,712 | ---
tags:
- conversational
---
# HarryPotter DialoGPT Model |
AccurateIsaiah/DialoGPT-small-jefftastic | 52f1de48b2d8a26aa2f78e9fd4092c66441fb2c2 | 2021-11-23T19:45:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AccurateIsaiah | null | AccurateIsaiah/DialoGPT-small-jefftastic | 1 | null | transformers | 27,713 | ---
tags:
- conversational
---
# jeff's 100% authorized brain scan |
AccurateIsaiah/DialoGPT-small-mozark | f2276aa0c48937f9abb37d42ab5e3830a5bf1114 | 2021-11-22T21:24:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AccurateIsaiah | null | AccurateIsaiah/DialoGPT-small-mozark | 1 | null | transformers | 27,714 | ---
tags:
- conversational
---
# Mozark's Brain Uploaded to Hugging Face |
Adil617/wav2vec2-base-timit-demo-colab | f2c00e7516d847126ff6cae27609776a28bae77a | 2022-01-29T21:05:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Adil617 | null | Adil617/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 27,715 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9314
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.686 | 0.16 | 20 | 13.6565 | 1.0 |
| 8.0711 | 0.32 | 40 | 12.5379 | 1.0 |
| 6.9967 | 0.48 | 60 | 9.7215 | 1.0 |
| 5.2368 | 0.64 | 80 | 5.8459 | 1.0 |
| 3.4499 | 0.8 | 100 | 3.3413 | 1.0 |
| 3.1261 | 0.96 | 120 | 3.2858 | 1.0 |
| 3.0654 | 1.12 | 140 | 3.1945 | 1.0 |
| 3.0421 | 1.28 | 160 | 3.1296 | 1.0 |
| 3.0035 | 1.44 | 180 | 3.1172 | 1.0 |
| 3.0067 | 1.6 | 200 | 3.1217 | 1.0 |
| 2.9867 | 1.76 | 220 | 3.0715 | 1.0 |
| 2.9653 | 1.92 | 240 | 3.0747 | 1.0 |
| 2.9629 | 2.08 | 260 | 2.9984 | 1.0 |
| 2.9462 | 2.24 | 280 | 2.9991 | 1.0 |
| 2.9391 | 2.4 | 300 | 3.0391 | 1.0 |
| 2.934 | 2.56 | 320 | 2.9682 | 1.0 |
| 2.9193 | 2.72 | 340 | 2.9701 | 1.0 |
| 2.8985 | 2.88 | 360 | 2.9314 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
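The card stops at the framework versions; a minimal inference sketch with the stock `transformers` ASR pipeline (`sample.wav` is a hypothetical local file):
```python
from transformers import pipeline

# Hypothetical input file; wav2vec2-base expects 16 kHz mono audio.
asr = pipeline("automatic-speech-recognition", model="Adil617/wav2vec2-base-timit-demo-colab")
print(asr("sample.wav")["text"])
```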
|
AdrianGzz/DialoGPT-small-harrypotter | 3e2781dc40e8779c3a6ee0367de4baf038efdbdb | 2021-10-11T21:52:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AdrianGzz | null | AdrianGzz/DialoGPT-small-harrypotter | 1 | null | transformers | 27,716 | ---
tags:
- conversational
---
# Harry Potter DialoGPT model |
Ajaykannan6/autonlp-manthan-16122692 | 8c1bc189faf33ae5f75c1274611c60e178da0fe5 | 2021-10-08T13:52:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"unk",
"dataset:Ajaykannan6/autonlp-data-manthan",
"transformers",
"autonlp",
"autotrain_compatible"
] | text2text-generation | false | Ajaykannan6 | null | Ajaykannan6/autonlp-manthan-16122692 | 1 | null | transformers | 27,717 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Ajaykannan6/autonlp-data-manthan
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 16122692
## Validation Metrics
- Loss: 1.1877621412277222
- Rouge1: 42.0713
- Rouge2: 23.3043
- RougeL: 37.3755
- RougeLsum: 37.8961
- Gen Len: 60.7117
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Ajaykannan6/autonlp-manthan-16122692
```
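The same request from Python, as a sketch using the `requests` library against the endpoint shown above (the bearer token placeholder is yours to fill in):
```python
import requests

API_URL = "https://api-inference.huggingface.co/Ajaykannan6/autonlp-manthan-16122692"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

# Same JSON payload as the cURL example above.
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
```
|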
Al-Kohollik/DialoGPT-medium-chloeprice | c668193d5c90214a8a5b8df6a0b6fb94015381e9 | 2021-09-13T06:44:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Al-Kohollik | null | Al-Kohollik/DialoGPT-medium-chloeprice | 1 | null | transformers | 27,718 | ---
tags:
- conversational
---
# AI Chatbot model trained for Chloe Price from Life is Strange EP 1. |
AlbertHSU/BertTEST | d2f33bbfb1afeb8bfe3a8af327e0129483fda679 | 2022-01-10T13:58:47.000Z | [
"pytorch"
] | null | false | AlbertHSU | null | AlbertHSU/BertTEST | 1 | 1 | null | 27,719 | Entry not found |
Aleksandar/electra-srb-oscar | 87ff9dd00d22c1c465d22ecb7dae766c4150a191 | 2021-09-22T12:19:35.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | false | Aleksandar | null | Aleksandar/electra-srb-oscar | 1 | null | transformers | 27,720 | ---
tags:
- generated_from_trainer
model_index:
- name: electra-srb-oscar
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
Aleksandar1932/distilgpt2-rock | 4ad093643be264cd4e98efa3be0ffd320026bb84 | 2022-03-18T21:22:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/distilgpt2-rock | 1 | null | transformers | 27,721 | Entry not found |
Aleksandar1932/gpt2-country | 8d3a67d18ff80d6435a2d8c23f44afe5cb053ce7 | 2022-03-18T23:38:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt2-country | 1 | null | transformers | 27,722 | Entry not found |
Aleksandar1932/gpt2-hip-hop | 2d24bf965958dcdb2bc948c7e697de373c4a8b62 | 2022-03-18T23:23:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt2-hip-hop | 1 | null | transformers | 27,723 | Entry not found |
Alireza1044/dwight_bert_lm | 8869ddafc6f3899e5584aee95495930a54affd01 | 2021-07-08T16:54:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Alireza1044 | null | Alireza1044/dwight_bert_lm | 1 | null | transformers | 27,724 | Entry not found |
Amirosein/distilbert_v1 | 382004181c6c1560886773d6b87286fdbf071ed6 | 2021-09-13T16:37:16.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Amirosein | null | Amirosein/distilbert_v1 | 1 | null | transformers | 27,725 | Entry not found |
Andranik/TestQA2 | 3ba460f2c28577c35381a60a281f9a6c22c9820c | 2022-02-17T16:43:26.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Andranik | null | Andranik/TestQA2 | 1 | null | transformers | 27,726 | ---
tags:
- generated_from_trainer
model-index:
- name: electra_large_discriminator_squad2_512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra_large_discriminator_squad2_512
This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
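A minimal usage sketch for the resulting checkpoint with the stock question-answering pipeline (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Andranik/TestQA2")
result = qa(question="Where is the Eiffel Tower?",
            context="The Eiffel Tower is located in Paris, France.")
print(result["answer"])  # expected: a span such as "Paris, France"
```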
|
Andranik/TestQaV1 | 10ad36dd0a247f61e3f2ce3de340e0a7ce5115e9 | 2022-02-17T13:50:04.000Z | [
"pytorch",
"rust",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Andranik | null | Andranik/TestQaV1 | 1 | null | transformers | 27,727 | Entry not found |
AndreLiu1225/t5-news | 687ebc613881f80d2dbf047080a4629793f303c4 | 2021-10-26T02:49:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | AndreLiu1225 | null | AndreLiu1225/t5-news | 1 | null | transformers | 27,728 | This is a pretrained model loaded from t5-base and adapted by changing the max_length and summary_length. |
Anonymous/ReasonBERT-TAPAS | 628f58a46dfa6feeacc084608e3dd3bc11b3688a | 2021-05-23T02:34:38.000Z | [
"pytorch",
"tapas",
"feature-extraction",
"transformers"
] | feature-extraction | false | Anonymous | null | Anonymous/ReasonBERT-TAPAS | 1 | null | transformers | 27,729 | Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details, please see https://openreview.net/forum?id=cGB7CMFtrSx
This is based on the tapas-base (no_reset) model and pre-trained for table input. |
AnonymousSub/AR_cline | 1d20e3384b8135523ca500eca7315332f0781440 | 2022-01-12T11:50:55.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_cline | 1 | null | transformers | 27,730 | Entry not found |
AnonymousSub/AR_consert | e561e0a3a07658d1cc4e9bf2faa0d403d2c8709b | 2022-01-12T12:04:53.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_consert | 1 | null | transformers | 27,731 | Entry not found |
AnonymousSub/AR_declutr | b1883cec21bb64c39956b45b14ed3855333451c4 | 2022-01-12T12:09:19.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_declutr | 1 | null | transformers | 27,732 | Entry not found |
AnonymousSub/AR_rule_based_bert_quadruplet_epochs_1_shard_1 | fa2dc9306ac53153f270a5c5d80daf6c98b2402c | 2022-01-11T00:05:10.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_bert_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,733 | Entry not found |
AnonymousSub/AR_rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | 3927a27d749dcd5efcfd729af528f2da71cf5f53 | 2022-01-06T14:29:39.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | 1 | null | transformers | 27,734 | Entry not found |
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | 90188296c9d1f51250b09b9000a7c31865d48f53 | 2022-01-06T09:06:26.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | 1 | null | transformers | 27,735 | Entry not found |
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | b328ec8902d806e0e0841df4b08efb59cf7c3fcf | 2022-01-06T11:30:34.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,736 | Entry not found |
AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_10 | 2e44605ea8291b2df8e55b61b08226935a540d53 | 2022-01-06T21:33:16.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_10 | 1 | null | transformers | 27,737 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | 0f50edfad6517ac5bd600ffd95fc31e6cb080dbb | 2022-01-06T07:32:53.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,738 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | 9f710878a3da340c138882be210889ee30bec1b8 | 2022-01-06T08:31:00.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | 1 | null | transformers | 27,739 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | d812b1c51540c154f12cbf07575283e35314513a | 2022-01-06T21:52:13.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | 1 | null | transformers | 27,740 | Entry not found |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | 6f4652189992d91a6c58676cfdb7ce835ece6a42 | 2022-01-06T22:10:59.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | 1 | null | transformers | 27,741 | Entry not found |
AnonymousSub/AR_rule_based_twostagequadruplet_hier_epochs_1_shard_1 | 70ff9d4126f56f132a2e1fada4ed1ca13183bf16 | 2022-01-10T22:38:39.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_twostagequadruplet_hier_epochs_1_shard_1 | 1 | null | transformers | 27,742 | Entry not found |
AnonymousSub/AR_rule_based_twostagetriplet_epochs_1_shard_1 | 1aa5ff2888bee156be2490045a296fc7d54373a0 | 2022-01-11T00:57:57.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_twostagetriplet_epochs_1_shard_1 | 1 | null | transformers | 27,743 | Entry not found |
AnonymousSub/EManuals_BERT_squad2.0 | ab0f71e2cf7467fbb123d28cadfdda89790112b3 | 2022-01-17T16:38:05.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/EManuals_BERT_squad2.0 | 1 | null | transformers | 27,744 | Entry not found |
AnonymousSub/SR_SDR_HF_model_base | ee668cf39d7f2d7de0227f78b3b1f681844d9b36 | 2022-01-12T15:00:44.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_SDR_HF_model_base | 1 | null | transformers | 27,745 | Entry not found |
AnonymousSub/SR_consert | 8acbfab79730b3a1c157f44a397c6d50f52a3b70 | 2022-01-12T10:50:12.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_consert | 1 | null | transformers | 27,746 | Entry not found |
AnonymousSub/SR_declutr | 26ae0332551cc682a426f4a824efe6eefcde4851 | 2022-01-12T10:53:08.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_declutr | 1 | null | transformers | 27,747 | Entry not found |
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | cdbd45e8cdc9456c8f94d79b6f3a1aa84fc1e621 | 2022-01-06T02:10:33.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | 1 | null | transformers | 27,748 | Entry not found |
AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_1 | fe7f25c8cf57cbe6d3debb53930e43d4f19f5bb9 | 2022-01-06T09:57:07.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_1 | 1 | null | transformers | 27,749 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | d7260925c1f50cf0a8c4e45439590dc37649da96 | 2022-01-06T07:26:49.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,750 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | c2f16a8f83ed73b98e56abd810430e62b37f6bab | 2022-01-06T10:54:28.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | 1 | null | transformers | 27,751 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | 7a0221de8daf3709a9c25ad586608c93c868bf05 | 2022-01-06T03:05:03.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | 1 | null | transformers | 27,752 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | b2a88d6f15591f74ecd896a7592e50abf212582d | 2022-01-06T09:20:04.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | 1 | null | transformers | 27,753 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | c34e3db9b07f1f6616546beae39aee7edab493fc | 2022-01-06T01:34:33.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | 1 | null | transformers | 27,754 | Entry not found |
AnonymousSub/SR_rule_based_twostagetriplet_hier_epochs_1_shard_1 | f8d3c97bc4d36642648303e06ff9b2e88ff3a0af | 2022-01-10T23:44:32.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_twostagetriplet_hier_epochs_1_shard_1 | 1 | null | transformers | 27,755 | Entry not found |
AnonymousSub/SR_specter | a119e780c0009e2e41ddcaf014c7411d5ce1515a | 2022-01-12T11:04:57.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_specter | 1 | null | transformers | 27,756 | Entry not found |
AnonymousSub/SciFive_pubmedqa_question_generation | 25c3bcb03664af0deebc264ebc1b6dce4d6ccc81 | 2022-01-06T12:17:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | AnonymousSub | null | AnonymousSub/SciFive_pubmedqa_question_generation | 1 | null | transformers | 27,757 | Entry not found |
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_1 | 54a462feffdd1e42c3b6f300a819ae7ff68e34d4 | 2021-12-22T16:55:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_1 | 1 | null | transformers | 27,758 | Entry not found |
AnonymousSub/bert_mean_diff_epochs_1_shard_1 | b7a4144c64b565b3ff7441c4bfc748f5cf5a03f9 | 2021-12-22T16:54:18.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/bert_mean_diff_epochs_1_shard_1 | 1 | null | transformers | 27,759 | Entry not found |
AnonymousSub/bert_mean_diff_epochs_1_shard_10 | b5bef73e4015b0efb6ebd58176cbeb431e7829ae | 2022-01-04T08:11:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/bert_mean_diff_epochs_1_shard_10 | 1 | null | transformers | 27,760 | Entry not found |
AnonymousSub/bert_snips | 22c3f2dd98dbb8672a8d9a9b07114e6a98a4ab0e | 2022-01-01T07:15:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/bert_snips | 1 | null | transformers | 27,761 | Entry not found |
AnonymousSub/cline-papers-biomed-0.618 | e72631e96b236f80eaf1e066ce2d3803b76d3496 | 2021-10-26T03:45:42.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | AnonymousSub | null | AnonymousSub/cline-papers-biomed-0.618 | 1 | null | transformers | 27,762 | Entry not found |
AnonymousSub/cline-papers-roberta-0.585 | 16a3ef9482295777b28c4882a6be201f63dbb50e | 2021-10-26T03:48:30.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | AnonymousSub | null | AnonymousSub/cline-papers-roberta-0.585 | 1 | null | transformers | 27,763 | Entry not found |
AnonymousSub/cline_emanuals | d732f01390cb48af1a2d4d54feabfe50bb7378ef | 2021-09-28T21:11:52.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | AnonymousSub | null | AnonymousSub/cline_emanuals | 1 | null | transformers | 27,764 | Entry not found |
AnonymousSub/declutr-emanuals-techqa | 16e610c48074e5f06b4d378680e812d9e3245e3b | 2021-09-30T19:15:25.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/declutr-emanuals-techqa | 1 | null | transformers | 27,765 | Entry not found |
AnonymousSub/declutr-model | 1a49723e09e85472d08304aeade175438c44b9bd | 2021-09-04T17:52:50.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AnonymousSub | null | AnonymousSub/declutr-model | 1 | null | transformers | 27,766 | Entry not found |
AnonymousSub/dummy_2_parent | 78508eaa4b0df1228f7aebf58f51399201343234 | 2021-11-03T05:47:57.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/dummy_2_parent | 1 | null | transformers | 27,767 | Entry not found |
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_10 | d25af6591d3f05f39fdd7298efd0df7075303b24 | 2022-01-04T08:17:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_10 | 1 | null | transformers | 27,768 | Entry not found |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1 | 2211e91adae3e8a13fc4848c31df1e74f02ae10f | 2022-01-20T18:24:10.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,769 | Entry not found |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_squad2.0 | b22f6db2268a638e0d4d797d140b006fb36c1686 | 2022-01-20T19:48:03.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_squad2.0 | 1 | null | transformers | 27,770 | Entry not found |
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1 | c210ba8ed2db6d49c0492064a6ff02e59b3d259a | 2022-01-18T22:38:37.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1 | 1 | null | transformers | 27,771 | Entry not found |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1 | dbacd8b3c444f7b40e3f4c907a103551e21ad9d5 | 2022-01-20T18:23:25.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,772 | Entry not found |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0 | f9fdfa990e517a37af48fb1f4996e2fd2f9a9c9c | 2022-01-20T20:43:23.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0 | 1 | null | transformers | 27,773 | Entry not found |
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1 | bf25b81d4cd7c086818c854a6777a87af550e47a | 2022-01-18T22:39:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1 | 1 | null | transformers | 27,774 | Entry not found |
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa_copy | c16655efd908ec28139a8169cf3268b72f1862f8 | 2022-01-23T17:59:31.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa_copy | 1 | null | transformers | 27,775 | Entry not found |
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_squad2.0 | f41e7e6afca97e4c073aab31d8f5735bbc05eeca | 2022-01-17T19:32:12.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_squad2.0 | 1 | null | transformers | 27,776 | Entry not found |
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_squad2.0 | 582a9f0ca9a19fcd5ed7d1a47a920243df0dc1aa | 2022-01-20T19:51:12.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_squad2.0 | 1 | null | transformers | 27,777 | Entry not found |
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa_copy | 5cf1541d638a988dfc6e2b39ad649db3665577b3 | 2022-01-23T17:33:16.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa_copy | 1 | null | transformers | 27,778 | Entry not found |
AnonymousSub/rule_based_roberta_hier_quadruplet_0.1_epochs_1_shard_1 | f53db9f1e36ab20e2c4a44ad7a14d240f3b4987a | 2022-01-18T22:33:35.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_quadruplet_0.1_epochs_1_shard_1 | 1 | null | transformers | 27,779 | Entry not found |
AnonymousSub/rule_based_roberta_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0 | 99508dc8baa15545cafa89e0d50ef7d1f37167b7 | 2022-01-19T02:14:58.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0 | 1 | null | transformers | 27,780 | Entry not found |
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | 1db868a4b7c8df5246911cf67f7833646ce3e148 | 2022-01-20T18:25:56.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,781 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1 | 70a7f8a340d181870f882c8213f15296da72446a | 2022-01-18T22:36:19.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1 | 1 | null | transformers | 27,782 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy | 6424c8ded0a1568b5ec2c985a3fe50a22fe3e561 | 2022-01-23T16:34:51.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy | 1 | null | transformers | 27,783 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1 | 56263e305c4b03222b5a3daf016aa5aa257c2376 | 2022-01-04T22:03:17.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1 | 1 | null | transformers | 27,784 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_squad2.0 | ab623607b3e02ba479b04e23638cc53801af9559 | 2022-01-17T21:32:09.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_squad2.0 | 1 | null | transformers | 27,785 | Entry not found |
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | 242d9382811378ada6304ed4bf965c3db61d385a | 2022-01-05T10:17:20.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | 1 | null | transformers | 27,786 | Entry not found |
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_squad2.0 | 1db81207357bdf1f4cbdc2528e984527b50ece0b | 2022-01-18T04:52:51.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_squad2.0 | 1 | null | transformers | 27,787 | Entry not found |
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1 | 253f461fcbd74d4bccec0261c5b75f330b64af27 | 2022-01-10T21:10:20.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1 | 1 | null | transformers | 27,788 | Entry not found |
AnonymousSub/specter-bert-model_copy | 5d336bdc5d4d43e5ef5ffcac57fb3a42a7012929 | 2022-01-23T04:51:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/specter-bert-model_copy | 1 | null | transformers | 27,789 | Entry not found |
AnonymousSub/specter-bert-model_squad2.0 | c02c2e304a4899073c1a72449c9dae5cabb6865b | 2022-01-17T18:33:40.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/specter-bert-model_squad2.0 | 1 | null | transformers | 27,790 | Entry not found |
ArBert/albert-base-v2-finetuned-ner-agglo-twitter | 08569e2b95bf40fe5d8a2168200d3ce9791e32ce | 2022-02-12T09:09:50.000Z | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/albert-base-v2-finetuned-ner-agglo-twitter | 1 | null | transformers | 27,791 | Entry not found |
ArBert/albert-base-v2-finetuned-ner-agglo | 7f33618a9d6f97f37047f5e9d044cbf233b67bf4 | 2022-02-12T11:30:15.000Z | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/albert-base-v2-finetuned-ner-agglo | 1 | null | transformers | 27,792 | Entry not found |
ArBert/albert-base-v2-finetuned-ner-gmm | f875b11e09bb9a5c554894aab0fbc1e3d58a5254 | 2022-02-12T11:51:06.000Z | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/albert-base-v2-finetuned-ner-gmm | 1 | null | transformers | 27,793 | Entry not found |
ArBert/albert-base-v2-finetuned-ner-kmeans-twitter | 7e27c8753a6d84f13a35a29c46130ab4cc129535 | 2022-02-12T08:00:32.000Z | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/albert-base-v2-finetuned-ner-kmeans-twitter | 1 | null | transformers | 27,794 | Entry not found |
ArBert/albert-base-v2-finetuned-ner-kmeans | 86ad1237753c4ed85307f7a278e83f4b0851ff40 | 2022-02-12T11:10:41.000Z | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/albert-base-v2-finetuned-ner-kmeans | 1 | null | transformers | 27,795 | Entry not found |
ArBert/roberta-base-finetuned-ner-agglo-twitter | 01a94157ee8409aa660d44dc216181a161725fe2 | 2022-02-12T11:40:08.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/roberta-base-finetuned-ner-agglo-twitter | 1 | null | transformers | 27,796 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-agglo-twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-agglo-twitter
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Precision: 0.6885
- Recall: 0.7665
- F1: 0.7254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 |
| No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 |
| 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 |
| 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 |
| 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 |
| 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 |
| 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 |
| 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 |
| 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 |
| 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 |
| 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 |
| 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 |
| 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 |
| 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 |
| 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 |
| 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 |
| 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 |
| 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 |
| 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 |
| 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
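A minimal usage sketch with the stock token-classification pipeline (the input sentence and aggregation strategy are illustrative choices, not taken from the card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/roberta-base-finetuned-ner-agglo-twitter",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Elon Musk visited Berlin last week."))
```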
|
ArJakusz/DialoGPT-small-stark | e152eedfa6a9d60deb89219bc2eb8b22bce5dc07 | 2021-11-16T02:52:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ArJakusz | null | ArJakusz/DialoGPT-small-stark | 1 | null | transformers | 27,797 | ---
tags:
- conversational
---
# Stark DialoGPT Model |
Aran/DialoGPT-small-harrypotter | 32392e47f1eba53e2fa7eaa644e3d91e7b883481 | 2021-11-21T15:02:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Aran | null | Aran/DialoGPT-small-harrypotter | 1 | null | transformers | 27,798 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab | 3ca4066f24fcbd7c359bfb9027c83126f7878ab2 | 2022-02-14T23:42:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Arnold | null | Arnold/wav2vec2-large-xlsr-hausa2-demo-colab | 1 | null | transformers | 27,799 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hausa2-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hausa2-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2993
- Wer: 0.4826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.6e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 3
- total_train_batch_size: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.1549 | 12.5 | 400 | 2.7289 | 1.0 |
| 2.0566 | 25.0 | 800 | 0.4582 | 0.6768 |
| 0.4423 | 37.5 | 1200 | 0.3037 | 0.5138 |
| 0.2991 | 50.0 | 1600 | 0.2993 | 0.4826 |
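The Wer column above is a word error rate; a tiny sketch of the metric itself, assuming the `jiwer` package and hypothetical transcripts:
```python
from jiwer import wer

reference = "ina son koyon harshen hausa"  # hypothetical ground-truth transcript (5 words)
hypothesis = "ina son koyo harshen hausa"  # hypothetical model output (1 substitution)
print(wer(reference, hypothesis))  # 1 error / 5 words = 0.2
```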
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|