modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AvengingPrime/Reddit_Model_2 | ce35893dd0ae5edb3a13329671274a143286bbf1 | 2022-04-22T21:19:34.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | AvengingPrime | null | AvengingPrime/Reddit_Model_2 | 1 | null | transformers | 31,400 | Entry not found |
AntoDono/DialoGPT-Bopy-5k | 777a05b4d61993b5b52c8860763cb1389f6008b7 | 2022-04-23T05:22:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AntoDono | null | AntoDono/DialoGPT-Bopy-5k | 1 | null | transformers | 31,401 | Entry not found |
negfir/bert_uncased_L-12_H-256_A-4wiki103 | 8801d169c1acd795c85e60e3f04a7df13178602f | 2022-04-23T09:07:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-12_H-256_A-4wiki103 | 1 | null | transformers | 31,402 | Entry not found |
jackh1995/bert-chinese-finetuned | b9ce6ecf449ac701ed7b80ebf496769b201d8ede | 2022-04-23T21:23:58.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | jackh1995 | null | jackh1995/bert-chinese-finetuned | 1 | null | transformers | 31,403 | Entry not found |
negfir/bert_uncased_L-12_H-128_A-2wiki103 | 0fd532469c8d51340a274588cd67b570c6497734 | 2022-04-25T17:17:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-12_H-128_A-2wiki103 | 1 | null | transformers | 31,404 | Entry not found |
adityay1221/Pixie.30.32 | 224515a20c8a471da7d2d81c37ede43c12fdc6a2 | 2022-04-23T11:55:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | adityay1221 | null | adityay1221/Pixie.30.32 | 1 | null | transformers | 31,405 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Pixie.30.32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pixie.30.32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1623
- Bleu: 47.6437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 121
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
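For reference, the configuration above corresponds roughly to the following Hugging Face `Seq2SeqTrainingArguments`. This is a hedged reconstruction, not the author's script: names not listed in the card (such as `output_dir` and `predict_with_generate`) are assumptions, and the Adam betas/epsilon shown above are the library defaults.
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pixie-30-32",            # assumed; not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=121,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    predict_with_generate=True,          # assumed, so BLEU can be computed at eval time
    # optimizer defaults: Adam with betas=(0.9, 0.999) and epsilon=1e-08
)
```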
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 9.09 | 100 | 1.5563 | 21.3462 |
| No log | 18.18 | 200 | 1.2493 | 29.2353 |
| No log | 27.27 | 300 | 1.1670 | 32.5700 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Wootang01/roberta-large-finetuned-hkdse-english-paper4 | 5a5c0f3829330e5f93b3e5c2d90bc147cc7de049 | 2022-04-23T14:01:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Wootang01 | null | Wootang01/roberta-large-finetuned-hkdse-english-paper4 | 1 | null | transformers | 31,406 | Entry not found |
allenai/aspire-biencoder-biomed-spec | 0bde92228e636dcbbdb43d01d4d2629ae969b471 | 2022-04-24T19:39:06.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2111.08366",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | allenai | null | allenai/aspire-biencoder-biomed-spec | 1 | null | transformers | 31,407 | ---
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine-grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Spec` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
## Model Card
### Model description
This model is a BERT bi-encoder trained for similarity of title-abstract pairs in biomedical scientific papers. The model is **initialized with the SPECTER encoder**. It takes the title and abstract of a paper as input and represents the paper with a single vector obtained by a scalar mix of the CLS token at every layer of the base encoder. These scalar-mix parameters can be important for performance on some datasets. Importantly, the scalar-mix weights are not included as part of this HF model; if you wish to use these parameters, please download the full model at: [`aspire-biencoder-biomed-spec-full.zip`](https://drive.google.com/file/d/1MDCv9Fc33eP015HTWKi50WYXixh72h5c/view?usp=sharing).
### Training data
The model is trained on pairs of co-cited papers in a contrastive learning setup, using 1.2 million biomedical paper pairs. During training, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers; for example, the papers cited in brackets in the passage below are all co-cited, and each pair's title and abstract would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
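The exact training code and loss are not included in this card; the snippet below is only a generic sketch of a contrastive loss with random in-batch negatives of the kind described above (the temperature value, embedding normalization, and cross-entropy formulation are assumptions rather than the paper's exact recipe).
```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(anchor_vecs, positive_vecs, temperature=0.05):
    # anchor_vecs, positive_vecs: (batch, dim) embeddings of co-cited paper pairs.
    # Every non-matching row in the batch acts as a random negative.
    a = F.normalize(anchor_vecs, dim=-1)
    p = F.normalize(positive_vecs, dim=-1)
    logits = a @ p.T / temperature                       # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for encoder outputs.
loss = in_batch_negative_loss(torch.randn(8, 768), torch.randn(8, 768))
```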
### Training procedure
The model was trained with the Adam optimizer and a learning rate of 1e-5, with 1000 warm-up steps followed by linear decay of the learning rate. Training convergence is checked using the loss on a held-out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for document similarity tasks in **biomedical** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
### How to use
Follow instructions for use detailed on the model github repo: https://github.com/allenai/aspire#specter-cocite
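In addition to the repo instructions, the hedged sketch below shows one self-contained way to encode title-abstract pairs with this HF checkpoint and rank candidates by L2 distance (the evaluation protocol described in the next section). Because the scalar-mix weights are not part of this HF model, it falls back to plain final-layer CLS pooling, and the titles and abstracts are placeholders, so treat it as an approximation rather than the paper's method.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/aspire-biencoder-biomed-spec")
model = AutoModel.from_pretrained("allenai/aspire-biencoder-biomed-spec")
model.eval()

def embed(title: str, abstract: str) -> torch.Tensor:
    # Encode title and abstract as a sentence pair; CLS pooling here is an
    # approximation, since the released scalar-mix weights live only in the full zip.
    inputs = tokenizer(title, abstract, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).last_hidden_state[:, 0, :]

query = embed("Placeholder query title", "Placeholder query abstract.")
candidates = [embed("Candidate A title", "Candidate A abstract."),
              embed("Candidate B title", "Candidate B abstract.")]
# Rank candidates by L2 distance to the query; smaller distance means more similar.
distances = [torch.dist(query, c, p=2).item() for c in candidates]
ranking = sorted(range(len(candidates)), key=distances.__getitem__)
```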
### Variable and metrics
This model is evaluated on information retrieval datasets with document-level queries. Here we report performance on RELISH (biomedical/English) and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task where, given a query scientific abstract, the task is to retrieve relevant candidate abstracts.
We rank documents by the L2 distance between the query and candidate documents.
### Evaluation results
The released model `aspire-biencoder-biomed-spec` (and `aspire-biencoder-biomed-spec-full`) is compared against `allenai/specter`. `aspire-biencoder-biomed-spec-full`<sup>*</sup> denotes the performance reported in our paper, averaged over 3 re-runs of the model. The released models `aspire-biencoder-biomed-spec` and `aspire-biencoder-biomed-spec-full` correspond to the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62| 77.20 |
| `aspire-biencoder-biomed-spec-full`<sup>*</sup> | 28.59 | 60.07 | 61.43| 77.96 |
| `aspire-biencoder-biomed-spec` | 26.07 | 54.89 | 61.47| 78.34 |
| `aspire-biencoder-biomed-spec-full` | 28.87 | 60.47 | 61.69| 78.22 |
Note that the absence of the linear mixing parameters in `aspire-biencoder-biomed-spec` hurts performance substantially compared to `aspire-biencoder-biomed-spec-full` on TRECCOVID; this dataset has a much larger candidate set than RELISH (~9000 vs. 60). Consider the more performant alternative models below for usage.
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-biencoder-compsci-spec`](https://huggingface.co/allenai/aspire-biencoder-compsci-spec): use this if you want to run on computer science papers.
[`aspire-biencoder-biomed-scib`](https://huggingface.co/allenai/aspire-biencoder-biomed-scib): an alternative bi-encoder identical to the above model, except that it is initialized with SciBERT instead of SPECTER. The above model underperforms it; `allenai/aspire-biencoder-biomed-scib` (or, even better, `aspire-biencoder-biomed-scib-full`) is recommended for use. |
chrishuber/roberta-retrained-mlni | c1179fea0a7b3b9ef3e005ff7941cf1d8f01983b | 2022-04-23T17:28:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | chrishuber | null | chrishuber/roberta-retrained-mlni | 1 | null | transformers | 31,408 | Entry not found |
Raffay/org_speech_processing_project_wav2vec2 | d8c7637581be72d0412ba6a2f281fd583c030189 | 2022-04-23T20:44:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Raffay | null | Raffay/org_speech_processing_project_wav2vec2 | 1 | null | transformers | 31,409 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: org_speech_processing_project_wav2vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# org_speech_processing_project_wav2vec2
This model is a fine-tuned version of [kingabzpro/wav2vec2-urdu](https://huggingface.co/kingabzpro/wav2vec2-urdu) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
azizbarank/cst5-base | 810352b388fd9647963e8edf3f62e8c0acbef9ac | 2022-04-23T18:16:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | azizbarank | null | azizbarank/cst5-base | 1 | null | transformers | 31,410 | ---
license: mit
---
## The T5 base model for the Czech Language
This is a T5 base model for the Czech language, obtained by shrinking the multilingual google/mt5-base model (https://huggingface.co/google/mt5-base).
To make this model, I retained only the Czech and some of the English embeddings from the original multilingual model.
# Modifications to the original multilingual t5 base model:
1. Parameters of the original model were reduced from 582M to 244M.
2. By keeping only the top 20K Czech and 10K English tokens, the SentencePiece vocabulary was shrunk from 250K to 30K tokens.
3. The model size was reduced from 2.2 GB to 0.9 GB.
Notes:
Since this is a base T5 model for the Czech language, it first needs to be fine-tuned on appropriate datasets before being used for any downstream task.
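As the note above says, this checkpoint is a starting point for fine-tuning. A minimal loading sketch is shown below; the repo id is taken from this card, while the task prefix and Czech example texts are placeholders rather than anything from the author.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("azizbarank/cst5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("azizbarank/cst5-base")

# One hypothetical Czech training pair; real fine-tuning would iterate over a dataset.
inputs = tokenizer("sumarizace: Dlouhý český text ke shrnutí.", return_tensors="pt")
labels = tokenizer("Krátké shrnutí.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # the quantity a Trainer would minimize
```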
References:
This work is largely based on the post by David Dale: "How to adapt a multilingual T5 model for a single language" (https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) |
Coma/Beter | 8430bd6aae622b81366aaa48210a69bfae1e8a56 | 2022-04-23T20:02:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Coma | null | Coma/Beter | 1 | null | transformers | 31,411 | ---
tags:
- conversational
---
# Peter from Your Boyfriend Game |
Reproducibility/naacl22_causalDistilBERT_instance_2 | d5c414b9b567e8403018fe0a8a0b354ffec2de2f | 2022-04-23T19:55:55.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Reproducibility | null | Reproducibility/naacl22_causalDistilBERT_instance_2 | 1 | null | transformers | 31,412 | Entry not found |
Reproducibility/naacl22_causalDistilBERT_instance_3 | e45f5a2a29179660134c33583665eb5ffda772ec | 2022-04-23T20:00:12.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Reproducibility | null | Reproducibility/naacl22_causalDistilBERT_instance_3 | 1 | null | transformers | 31,413 | Entry not found |
smeoni/nbme-electra-large-discriminator | d5f19ee3b5cbc2cf213fcff6fe7662d840d3d261 | 2022-04-23T21:44:16.000Z | [
"pytorch",
"tensorboard",
"electra",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/nbme-electra-large-discriminator | 1 | null | transformers | 31,414 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: nbme-electra-large-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-electra-large-discriminator
This model is a fine-tuned version of [google/electra-large-discriminator](https://huggingface.co/google/electra-large-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.1704 | 1.0 | 1850 | 6.1313 |
| 6.1305 | 2.0 | 3700 | 6.1243 |
| 6.1109 | 3.0 | 5550 | 6.1201 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dllllb/poetnet-mt5-stihiru-libru | 4a63390271c34e8a99e1dad65118c26ad2e04c6f | 2022-04-23T23:13:12.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dllllb | null | dllllb/poetnet-mt5-stihiru-libru | 1 | null | transformers | 31,415 | Entry not found |
Lucifermorningstar011/autotrain-ner-778023879 | 21de8637c406c6cd04e5f1224d4b6663fb03bc71 | 2022-04-24T00:00:13.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:Lucifermorningstar011/autotrain-data-ner",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Lucifermorningstar011 | null | Lucifermorningstar011/autotrain-ner-778023879 | 1 | null | transformers | 31,416 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Lucifermorningstar011/autotrain-data-ner
co2_eq_emissions: 43.26533004662002
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 778023879
- CO2 Emissions (in grams): 43.26533004662002
## Validation Metrics
- Loss: 5.475859779835446e-06
- Accuracy: 0.9999996519918594
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Lucifermorningstar011/autotrain-ner-778023879
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Lucifermorningstar011/autotrain-ner-778023879", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Lucifermorningstar011/autotrain-ner-778023879", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
dllllb/poetnet-rut5-stihiru-libru-finetune | 0382a41b9ad84fa37bebe282a9777cd7d1bfb67d | 2022-04-24T00:53:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dllllb | null | dllllb/poetnet-rut5-stihiru-libru-finetune | 1 | null | transformers | 31,417 | Entry not found |
aiface/5500 | 083d6694bb1ea399bfdaeab4092a8977fbb26cfb | 2022-04-24T07:26:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aiface | null | aiface/5500 | 1 | null | transformers | 31,418 | Entry not found |
jackh1995/roberta-base-chinese-extractive-qa | 7399394b7e3ca409da021fbee6cc39fa0f67b907 | 2022-04-24T09:49:31.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | jackh1995 | null | jackh1995/roberta-base-chinese-extractive-qa | 1 | null | transformers | 31,419 | Entry not found |
MachineBabs/RickBot | a58722669ace86eecfdc5494a024c82b65c7ff1e | 2022-04-24T09:36:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | MachineBabs | null | MachineBabs/RickBot | 1 | null | transformers | 31,420 | ---
tags:
- conversational
---
|
smeoni/nbme-gpt2 | 31f002679c535b2c306b6104ed978fae094033b0 | 2022-04-24T11:02:07.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | smeoni | null | smeoni/nbme-gpt2 | 1 | null | transformers | 31,421 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nbme-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-gpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3684
- Accuracy: 0.5070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.99 | 101 | 2.5636 | 0.4809 |
| No log | 1.99 | 202 | 2.4075 | 0.5018 |
| No log | 2.99 | 303 | 2.3684 | 0.5070 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
spuun/kekbot-beta-1-medium | 29b5bc4d4bb8ff96aa827e635dd4c301fb42cf85 | 2022-04-24T23:40:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational",
"license:cc-by-nc-sa-4.0",
"co2_eq_emissions"
] | conversational | false | spuun | null | spuun/kekbot-beta-1-medium | 1 | null | transformers | 31,422 | ---
language:
- en
tags:
- conversational
co2_eq_emissions:
emissions: "370"
source: "mlco2.github.io"
training_type: "fine-tuning"
geographical_location: "West Java, Indonesia"
hardware_used: "1 Tesla P100"
license: cc-by-nc-sa-4.0
widget:
- text: "Hey kekbot! What's up?"
example_title: "Asking what's up"
- text: "Hey kekbot! How r u?"
example_title: "Asking how he is"
---
> THIS MODEL IS IN PUBLIC BETA, PLEASE DO NOT EXPECT ANY FORM OF STABILITY IN ITS CURRENT STATE.
# Art Union server chatbot
Based on a DialoGPT-medium model, fine-tuned on a small subset (<52k messages) of Art Union's general-chat channel.
### Current issues
These (which will hopefully be fixed in future iterations) include, but are not limited to:
- Limited turns: after ~11 turns, output may break for no apparent reason.
- Inconsistent variance: it acts like an overfitted model from time to time for no reason whatsoever. |
macavaney/monot5-base-msmarco-sim1 | 62e767c3599593208da35a048487c6382566214a | 2022-04-24T15:27:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | macavaney | null | macavaney/monot5-base-msmarco-sim1 | 1 | null | transformers | 31,423 | Entry not found |
tosin/dialogpt_afriwoz_wolof | 90e9b80708cfb736ace96eee4b50febfbc012f6a | 2022-04-25T06:45:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:multi_woz_v22 and AfriWOZ",
"arxiv:2204.08083",
"transformers",
"conversational",
"license:cc-by-4.0"
] | conversational | false | tosin | null | tosin/dialogpt_afriwoz_wolof | 1 | null | transformers | 31,424 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- multi_woz_v22 and AfriWOZ
metrics:
- perplexity
widget:
- text: "dedet li rek la soxla. jerejef. ba benen yoon."
---
## DialoGPT_AfriWOZ
This is a fine-tuned version of DialoGPT (small) on the AfriWOZ dataset. It is intended to be used as a conversational system in the Wolof language.
The dataset it is trained on is limited in scope, as it covers only certain domains such as restaurants, hotels, taxis, and booking.
The perplexity achieved on the test and validation sets are 20.74 and 11.66, respectively.
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | sowwu bi la bëg |
|Bot | waw, ndax mën ngan ma jox ay leeral si bërëb buñu feetee ak lekkukaay bi |
|User | soxla na dem gawu. ngir leeral rek bëg na joggee peterborough di dem cambridge |
|Bot | soxlu na dem ginaw jurrom ñenti waxtu ak ñaar fukki simili ak jurrom ben. |
|User | waw jiss sal ma ben bu seer si njëg si toluway |
|Bot | jënd bi baaxna. jerejef. ba benen yoon. |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work can be found on arXiv: [https://arxiv.org/pdf/2204.08083.pdf](https://arxiv.org/pdf/2204.08083.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_afriwoz_wolof")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_afriwoz_wolof")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
    print("DialoGPT_wolof_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
shiyue/wav2vec2-large-xlsr-53-chr-phonetic | 78e04a329e502593ddbe5c064a00e6d880a15d9d | 2022-04-24T17:40:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shiyue | null | shiyue/wav2vec2-large-xlsr-53-chr-phonetic | 1 | null | transformers | 31,425 | Entry not found |
shiyue/wav2vec2-large-xlsr-53-chr-syllabary | cea715d42a782338b96a233727b555b6399fe90d | 2022-04-24T17:52:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | shiyue | null | shiyue/wav2vec2-large-xlsr-53-chr-syllabary | 1 | null | transformers | 31,426 | Entry not found |
umarkhalid96/t5-small-train | 370f95c03fc27f04c1a7ce504b0332651c62dbf4 | 2022-04-29T12:36:08.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | umarkhalid96 | null | umarkhalid96/t5-small-train | 1 | null | transformers | 31,427 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-train
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2669
- Rouge1: 43.2372
- Rouge2: 21.6755
- Rougel: 38.1637
- Rougelsum: 38.5444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.2032 | 1.0 | 45 | 2.6305 | 34.393 | 15.4821 | 30.3601 | 30.5865 |
| 2.6291 | 2.0 | 90 | 2.4169 | 38.2327 | 18.4622 | 34.2887 | 34.3385 |
| 2.4294 | 3.0 | 135 | 2.3395 | 40.4405 | 19.927 | 36.559 | 36.8095 |
| 2.3191 | 4.0 | 180 | 2.3059 | 41.4214 | 20.4534 | 36.6399 | 36.9088 |
| 2.2949 | 5.0 | 225 | 2.2857 | 42.6906 | 21.1492 | 37.5557 | 37.8722 |
| 2.2591 | 6.0 | 270 | 2.2762 | 43.1598 | 21.6179 | 38.1235 | 38.5053 |
| 2.1722 | 7.0 | 315 | 2.2680 | 43.4447 | 21.8048 | 38.4077 | 38.7384 |
| 2.1993 | 8.0 | 360 | 2.2669 | 43.2372 | 21.6755 | 38.1637 | 38.5444 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
chrishuber/roberta-retrained-kaggledev | d739e3f0be0a5c4b44fbbbaea65853e719617945 | 2022-04-24T20:05:12.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | chrishuber | null | chrishuber/roberta-retrained-kaggledev | 1 | null | transformers | 31,428 | Entry not found |
Nadhiya/distilbert-base-uncased-finetuned-squad | 8a188aebc04cb8784d0be4a90a37d26d3643bc2d | 2022-04-29T18:20:29.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Nadhiya | null | Nadhiya/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 31,429 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 54 | 5.8535 |
| No log | 2.0 | 108 | 6.4469 |
| No log | 3.0 | 162 | 6.6023 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
akashsingh123/wav2vec2-base-timit-demo-colab | a0cda0f22c69d80c24bc281a1a725b455ba3cac4 | 2022-04-24T23:12:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | akashsingh123 | null | akashsingh123/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 31,430 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
lsb/wav2vec2-base-pemlsb-la2 | bdb57ac054cc8d50510cd0f5682e0f7497a986cf | 2022-04-26T14:56:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lsb | null | lsb/wav2vec2-base-pemlsb-la2 | 1 | null | transformers | 31,431 | Entry not found |
aakhilv/tonystark | 7e565dd02985e1eee94f4f2472e3639c566d8796 | 2022-04-25T01:58:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aakhilv | null | aakhilv/tonystark | 1 | null | transformers | 31,432 | ---
tags:
- conversational
---
# Tony Stark DialoGPT Model |
PSW/random_sim_del_seed27 | 46b7ce3053b8fadcd0db5d6e9298ce384d592ddb | 2022-04-25T05:27:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_del_seed27 | 1 | null | transformers | 31,433 | Entry not found |
LordOfTheSheep/DialoGPT-small-AngelDust | 326b07696acc89dbe80a832634ea54808174b9de | 2022-04-25T07:28:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | LordOfTheSheep | null | LordOfTheSheep/DialoGPT-small-AngelDust | 1 | null | transformers | 31,434 | Entry not found |
PSW/random_sim_ins_seed27 | efacf208dc0a400ec7d74b1529ca867cd60de649 | 2022-04-25T08:43:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins_seed27 | 1 | null | transformers | 31,435 | Entry not found |
maryam359/wav2vec-speech-project | 74015e842877fbba6fb5a856f767b3cc20cdb57b | 2022-04-25T12:31:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | maryam359 | null | maryam359/wav2vec-speech-project | 1 | null | transformers | 31,436 | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec-speech-project
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-speech-project
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
abhiGOAT/wav2vec2-large-xls-r-300m-turkish-colab | a9f9486b4c54d534b24e957708dbc8428b2891c8 | 2022-04-25T12:45:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | abhiGOAT | null | abhiGOAT/wav2vec2-large-xls-r-300m-turkish-colab | 1 | null | transformers | 31,437 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
kittisak612/bias-tagger | a9afe45ad938f0bbeaba8d470ea27d3987371154 | 2022-04-25T11:29:19.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kittisak612 | null | kittisak612/bias-tagger | 1 | null | transformers | 31,438 | Entry not found |
PSW/min_sim_ins_seed1 | d1374c3db7a8f0263907cfdc1985db8881703288 | 2022-04-25T12:50:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min_sim_ins_seed1 | 1 | null | transformers | 31,439 | Entry not found |
CarlCochet/trajectory-transformer-ant-expert-v2 | 6f3b87538892ebf583aa92bb5607605b1257215b | 2022-05-12T16:55:36.000Z | [
"pytorch",
"trajectory_transformer",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | CarlCochet | null | CarlCochet/trajectory-transformer-ant-expert-v2 | 1 | null | transformers | 31,440 | ---
license: mit
---
|
PSW/min_sim_ins_seed42 | 40c709d725f12eb195db90e742cb62f690cd7f21 | 2022-04-25T13:46:24.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min_sim_ins_seed42 | 1 | null | transformers | 31,441 | Entry not found |
PSW/half_sim_ins_seed1 | 28d7d20a436ee0d3d8ddfa5c68e5ed88d228c3f9 | 2022-04-25T14:31:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/half_sim_ins_seed1 | 1 | null | transformers | 31,442 | Entry not found |
Lucifermorningstar011/autotrain-final-784824218 | 7fd428ee20d2c331f0c740680a1695322cc2a8fa | 2022-04-25T17:44:20.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:Lucifermorningstar011/autotrain-data-final",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Lucifermorningstar011 | null | Lucifermorningstar011/autotrain-final-784824218 | 1 | null | transformers | 31,443 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Lucifermorningstar011/autotrain-data-final
co2_eq_emissions: 237.58504390669626
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 784824218
- CO2 Emissions (in grams): 237.58504390669626
## Validation Metrics
- Loss: 0.2379177361726761
- Accuracy: 0.9734973172736223
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Lucifermorningstar011/autotrain-final-784824218
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Lucifermorningstar011/autotrain-final-784824218", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Lucifermorningstar011/autotrain-final-784824218", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Lucifermorningstar011/autotrain-final-784824209 | 5df23c332da5d7cb4584844837ebdf6895515d66 | 2022-04-25T17:32:25.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:Lucifermorningstar011/autotrain-data-final",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Lucifermorningstar011 | null | Lucifermorningstar011/autotrain-final-784824209 | 1 | null | transformers | 31,444 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Lucifermorningstar011/autotrain-data-final
co2_eq_emissions: 0.8282546197737336
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 784824209
- CO2 Emissions (in grams): 0.8282546197737336
## Validation Metrics
- Loss: 0.18077287077903748
- Accuracy: 0.9639925673427913
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Lucifermorningstar011/autotrain-final-784824209
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Lucifermorningstar011/autotrain-final-784824209", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Lucifermorningstar011/autotrain-final-784824209", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
robinhad/data2vec-large-uk | d2b2ff787a49d5a2b7ced8246dc3e7afb3e0c391 | 2022-04-25T17:27:44.000Z | [
"pytorch",
"tensorboard",
"data2vec-audio",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | robinhad | null | robinhad/data2vec-large-uk | 1 | 2 | transformers | 31,445 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: data2vec-large-uk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-large-uk
This model is a fine-tuned version of [facebook/data2vec-audio-large-960h](https://huggingface.co/facebook/data2vec-audio-large-960h) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3472
- eval_wer: 0.3410
- eval_cer: 0.0832
- eval_runtime: 231.0008
- eval_samples_per_second: 25.108
- eval_steps_per_second: 3.139
- epoch: 33.06
- step: 20400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 1.18.3
- Tokenizers 0.12.1
|
obokkkk/wav2vec2-base-960h-finetuned_common_voice | 37ce5b3304f0936e83df522008f3a5e5a686ab1b | 2022-04-27T05:15:02.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | obokkkk | null | obokkkk/wav2vec2-base-960h-finetuned_common_voice | 1 | null | transformers | 31,446 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-960h-finetuned_common_voice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned_common_voice
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cj-mills/codeparrot-small | e5476e62ce51ed78a6650c8e461a187efba8438b | 2022-04-25T23:09:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cj-mills | null | cj-mills/codeparrot-small | 1 | null | transformers | 31,447 | Entry not found |
huggingtweets/unbridledbot | b5f5d60045048d8e0fb35c5d40b61ffe5af8507d | 2022-04-25T20:48:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/unbridledbot | 1 | null | transformers | 31,448 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1517600518167842816/OIgwXfB-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">unbridled_id_bot</div>
<div style="text-align: center; font-size: 14px;">@unbridledbot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from unbridled_id_bot.
| Data | unbridled_id_bot |
| --- | --- |
| Tweets downloaded | 62 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 62 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cq0nyq4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @unbridledbot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bj4mq8d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bj4mq8d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/unbridledbot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nizamudma/t5-small-finetuned-cnn-2 | 7477b2e67cc66ec17fed6c5c01960a04a6ee7634 | 2022-04-26T22:05:50.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nizamudma | null | nizamudma/t5-small-finetuned-cnn-2 | 1 | null | transformers | 31,449 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.5085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6620
- Rouge1: 24.5085
- Rouge2: 11.7925
- Rougel: 20.2631
- Rougelsum: 23.1253
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8435 | 1.0 | 35890 | 1.6753 | 24.5387 | 11.7851 | 20.2792 | 23.1595 | 18.999 |
| 1.8143 | 2.0 | 71780 | 1.6660 | 24.5268 | 11.7976 | 20.2699 | 23.1384 | 18.9996 |
| 1.816 | 3.0 | 107670 | 1.6620 | 24.5085 | 11.7925 | 20.2631 | 23.1253 | 18.9996 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
negfir/bert_uncased_L-10_H-768_A-12wiki103 | a523be452b3c6a0dcc71b55ded3e387dbadf3e80 | 2022-04-25T22:06:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-768_A-12wiki103 | 1 | null | transformers | 31,450 | Entry not found |
PSW/min_sim_ins_seed27 | 7f632205214c15fad02ae39f649f9ffe09aefc38 | 2022-04-26T01:37:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min_sim_ins_seed27 | 1 | null | transformers | 31,451 | Entry not found |
Ghost1/distilbert-base-uncased-finetuned-imdb-accelerate | 9b73a42ae5e655a212aa08bc191252220b37669d | 2022-04-26T01:43:57.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Ghost1 | null | Ghost1/distilbert-base-uncased-finetuned-imdb-accelerate | 1 | null | transformers | 31,452 | Entry not found |
Jonesy/DialoGPT-small_FG | 4112f80d3bc3de3c86d348911e647ddc013b94cd | 2022-04-26T15:23:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jonesy | null | Jonesy/DialoGPT-small_FG | 1 | null | transformers | 31,453 | ---
tags:
- conversational
---
# Family Guy DialoGPT Model v2
|
negfir/bert_uncased_L-10_H-512_A-8wiki103 | 639bfadcbaef1b993f899923e896fa68620a079c | 2022-04-26T04:24:58.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-512_A-8wiki103 | 1 | null | transformers | 31,454 | Entry not found |
yellowjs0304/lmv2large | fd2b875cc59352933aba966cd7c4bb720567b915 | 2022-04-26T05:42:50.000Z | [
"pytorch",
"layoutlmv2",
"en",
"arxiv:2012.14740",
"transformers",
"license:cc-by-nc-sa-4.0"
] | null | false | yellowjs0304 | null | yellowjs0304/lmv2large | 1 | null | transformers | 31,455 | ---
language: en
license: cc-by-nc-sa-4.0
---
# LayoutLMv2
**Multimodal (text + layout/format + image) pre-training for document AI**
## Introduction
LayoutLMv2 is an improved version of LayoutLM with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. It outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672).
[LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740)
Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou, [ACL 2021](#)
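Since this card does not include usage code, here is a hedged sketch of the standard LayoutLMv2 feature-extraction flow from the transformers documentation. It assumes this checkpoint follows the upstream LayoutLMv2 layout, borrows the processor from `microsoft/layoutlmv2-base-uncased` (the card does not say which tokenizer it pairs with), and requires detectron2 plus pytesseract for the processor's built-in OCR; the image path is a placeholder.
```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2Model

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2Model.from_pretrained("yellowjs0304/lmv2large")

image = Image.open("document_page.png").convert("RGB")  # placeholder document scan
encoding = processor(image, return_tensors="pt")  # runs OCR and layout extraction
outputs = model(**encoding)
hidden_states = outputs.last_hidden_state  # joint text + layout + image token embeddings
```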
|
MSLars/t5-base-ace_en_p_pretrained | d3986c5a4c557aff23e224fca07c7c2b9725ae6f | 2022-04-26T08:16:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | MSLars | null | MSLars/t5-base-ace_en_p_pretrained | 1 | null | transformers | 31,456 | Entry not found |
DioLiu/distilroberta-base-horror_shake_head | 77ad63cfefc534c4ae97e176b2321567c96ad4be | 2022-04-26T08:39:43.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DioLiu | null | DioLiu/distilroberta-base-horror_shake_head | 1 | null | transformers | 31,457 | Entry not found |
ntoldalagi/nick_asr_COMBO_v2 | 009b05e4e2f515819fec791665dfacee92919790 | 2022-05-03T11:08:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"transformers",
"generated_from_trainer",
"model-index"
] | null | false | ntoldalagi | null | ntoldalagi/nick_asr_COMBO_v2 | 1 | null | transformers | 31,458 | ---
tags:
- generated_from_trainer
model-index:
- name: nick_asr_COMBO_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nick_asr_COMBO_v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4474
- Wer: 0.6535
- Cer: 0.2486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.3049 | 1.0 | 687 | 1.5013 | 0.7015 | 0.2607 |
| 0.2294 | 2.0 | 1374 | 1.5933 | 0.6693 | 0.2612 |
| 0.261 | 3.0 | 2061 | 1.6275 | 0.6985 | 0.2687 |
| 0.2658 | 4.0 | 2748 | 1.5568 | 0.6729 | 0.2581 |
| 0.1704 | 5.0 | 3435 | 1.5363 | 0.6650 | 0.2529 |
| 0.2537 | 6.0 | 4122 | 1.5764 | 0.6669 | 0.2542 |
| 0.2333 | 7.0 | 4809 | 1.5285 | 0.6596 | 0.2519 |
| 0.168 | 8.0 | 5496 | 1.4945 | 0.6571 | 0.2500 |
| 0.3263 | 9.0 | 6183 | 1.4968 | 0.6547 | 0.2510 |
| 0.3238 | 10.0 | 6870 | 1.4474 | 0.6535 | 0.2486 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
hbruce11216/distilbert-base-uncased-finetuned-imdb | 9759ea94a84775f18dc890e3f3b91e9c1387b9d8 | 2022-04-26T13:56:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | hbruce11216 | null | hbruce11216/distilbert-base-uncased-finetuned-imdb | 1 | null | transformers | 31,459 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
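### Usage sketch
Since this is a masked-language model fine-tuned on IMDB text, it can be queried with the `fill-mask` pipeline. A minimal sketch (the example sentence is illustrative only):
```python
from transformers import pipeline

# DistilBERT-uncased checkpoints use [MASK] as the mask token.
unmasker = pipeline("fill-mask", model="hbruce11216/distilbert-base-uncased-finetuned-imdb")

for pred in unmasker("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```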
|
Isobutylcyclopentane/2022-143326-finetuned-eurosat | cb2fac01574227fd9d325921fa91b8bbe1dc69b3 | 2022-04-26T17:05:48.000Z | [
"pytorch",
"tensorboard",
"perceiver",
"image-classification",
"transformers"
] | image-classification | false | Isobutylcyclopentane | null | Isobutylcyclopentane/2022-143326-finetuned-eurosat | 1 | null | transformers | 31,460 | Entry not found |
charityking2358/taglish-electra-20K | 647abd9b95fa3bd9a86787ac3743c825d2a20496 | 2022-04-26T14:50:31.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-20K | 1 | null | transformers | 31,461 | Entry not found |
Jonesy/DialoGPT-medium_FG | c2f88464410db78cfc8af0ef4dfe650dba94c511 | 2022-04-26T17:38:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jonesy | null | Jonesy/DialoGPT-medium_FG | 1 | null | transformers | 31,462 | ---
tags:
- conversational
---
# Family Guy DialoGPT Model v3 (Medium output)
|
charityking2358/taglish-electra-25K | 7b46aba57ee009edf823e679b73526cb3279f478 | 2022-04-27T16:07:20.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-25K | 1 | null | transformers | 31,463 | Entry not found |
Amrendra/roberta-tapt-acl-arc | 06cbad9d25b879672657a030b7456e6ea5dc79dc | 2022-04-26T18:28:54.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Amrendra | null | Amrendra/roberta-tapt-acl-arc | 1 | null | transformers | 31,464 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-tapt-acl-arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-tapt-acl-arc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 89 | 2.6476 |
| No log | 2.0 | 178 | 2.7191 |
| No log | 3.0 | 267 | 2.4195 |
| No log | 4.0 | 356 | 2.4680 |
| No log | 5.0 | 445 | 2.3363 |
| 2.5791 | 6.0 | 534 | 2.1846 |
| 2.5791 | 7.0 | 623 | 2.0593 |
| 2.5791 | 8.0 | 712 | 1.9373 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
renjithks/expense-ner | 4c3ba716c5c94eec2f0f65cd980fd73c7df0825c | 2022-04-26T18:28:41.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | renjithks | null | renjithks/expense-ner | 1 | null | transformers | 31,465 | Model for itemisation of receipts |
PSW/random_sim_ins2_seed1 | c9d6576ad8a683176891f7e27a4b45f304f95c5b | 2022-04-27T02:27:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins2_seed1 | 1 | null | transformers | 31,466 | Entry not found |
PSW/random_sim_ins2_seed42 | 3b1b64e9f49d9b3e38909e50e0ab5e6508357827 | 2022-04-27T04:21:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins2_seed42 | 1 | null | transformers | 31,467 | Entry not found |
0x12/t5small-opus_infopankki-en-zh | 87eb8e3e767c13b811a555253fb11ce90a8046c3 | 2022-04-27T06:23:53.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:opus_infopankki",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | 0x12 | null | 0x12/t5small-opus_infopankki-en-zh | 1 | null | transformers | 31,468 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
model-index:
- name: t5small-opus_infopankki-en-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5small-opus_infopankki-en-zh
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.0853 | 1.0 | 1496 | 2.7074 |
| 2.8378 | 2.0 | 2992 | 2.5717 |
| 2.7637 | 3.0 | 4488 | 2.4829 |
| 2.6622 | 4.0 | 5984 | 2.4156 |
| 2.5986 | 5.0 | 7480 | 2.3649 |
| 2.5488 | 6.0 | 8976 | 2.3184 |
| 2.486 | 7.0 | 10472 | 2.2808 |
| 2.4566 | 8.0 | 11968 | 2.2485 |
| 2.4413 | 9.0 | 13464 | 2.2181 |
| 2.3806 | 10.0 | 14960 | 2.1939 |
| 2.3741 | 11.0 | 16456 | 2.1711 |
| 2.3419 | 12.0 | 17952 | 2.1511 |
| 2.3197 | 13.0 | 19448 | 2.1318 |
| 2.3229 | 14.0 | 20944 | 2.1170 |
| 2.2885 | 15.0 | 22440 | 2.1032 |
| 2.2781 | 16.0 | 23936 | 2.0908 |
| 2.2447 | 17.0 | 25432 | 2.0792 |
| 2.2589 | 18.0 | 26928 | 2.0695 |
| 2.2274 | 19.0 | 28424 | 2.0611 |
| 2.2311 | 20.0 | 29920 | 2.0538 |
| 2.2263 | 21.0 | 31416 | 2.0482 |
| 2.2066 | 22.0 | 32912 | 2.0443 |
| 2.2042 | 23.0 | 34408 | 2.0413 |
| 2.211 | 24.0 | 35904 | 2.0390 |
| 2.1952 | 25.0 | 37400 | 2.0385 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
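### Usage sketch
A minimal inference sketch through the generic `text2text-generation` pipeline. The `"translate English to Chinese:"` prefix is an assumption — the card does not state which task prefix, if any, was used during fine-tuning.
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="0x12/t5small-opus_infopankki-en-zh")

# The prompt format is assumed; adjust the prefix if the checkpoint expects another one.
out = translator("translate English to Chinese: Where is the nearest health centre?")
print(out[0]["generated_text"])
```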
|
Wikidepia/byt5-sentfix | de5f1a880937fec1f205da48547e4ccab4ed02a3 | 2022-04-27T06:52:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Wikidepia | null | Wikidepia/byt5-sentfix | 1 | null | transformers | 31,469 | Entry not found |
dannytkn/bert-finetuned-squad | 4b98a4b703c43a9884f182fd95944d5c11a971ce | 2022-04-28T20:12:13.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | dannytkn | null | dannytkn/bert-finetuned-squad | 1 | null | transformers | 31,470 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.2
- Datasets 1.18.3
- Tokenizers 0.10.3
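### Configuration sketch
The hyperparameters listed above map directly onto `TrainingArguments`. The snippet below is an illustrative reconstruction, not the author's original training script; dataset preprocessing and the `Trainer` wiring are omitted.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",
    learning_rate=2e-5,             # learning_rate: 2e-05
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,                        # seed: 42
    lr_scheduler_type="linear",     # linear schedule, Adam with default betas/epsilon
    num_train_epochs=3,             # num_epochs: 3
    fp16=True,                      # mixed_precision_training: Native AMP
)
```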
|
nz/RITA_m | 7c39abb7043f627ec84d2594099e854a323ddadb | 2022-04-27T16:30:27.000Z | [
"pytorch",
"codegen",
"transformers"
] | null | false | nz | null | nz/RITA_m | 1 | null | transformers | 31,471 | Entry not found |
nz/RITA_l | 1ee6ba4e03372848d009b250574977e688641531 | 2022-04-27T16:30:09.000Z | [
"pytorch",
"rita",
"transformers"
] | null | false | nz | null | nz/RITA_l | 1 | null | transformers | 31,472 | Entry not found |
PSW/random_sim_swap_seed27 | 8e77228b3c494b94850d692a83dda7943341a258 | 2022-04-27T10:46:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_swap_seed27 | 1 | null | transformers | 31,473 | Entry not found |
emr-se-miniproject/roberta-base-emr | 9eff0e48640010c4ad3c1df17dde3cfb6243fc31 | 2022-04-27T11:17:30.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | emr-se-miniproject | null | emr-se-miniproject/roberta-base-emr | 1 | null | transformers | 31,474 | |
PSW/random_sim_swap_seed42 | df82c19f4014a516003e87371486b7c527a52bf1 | 2022-04-27T11:43:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_swap_seed42 | 1 | null | transformers | 31,475 | Entry not found |
ahmad573/wav2vec2-base-timit-demo-colab | 69c71d3d1bdf48d5ce71b4260887c14d4113ed72 | 2022-04-30T15:09:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ahmad573 | null | ahmad573/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 31,476 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5827
- Wer: 0.4147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4314 | 7.04 | 500 | 0.5453 | 0.4922 |
| 0.2357 | 14.08 | 1000 | 0.5573 | 0.4376 |
| 0.1283 | 21.13 | 1500 | 0.5827 | 0.4147 |
| 0.1169 | 28.17 | 2000 | 0.5827 | 0.4147 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
PSW/minmax_sim_swap_seed1 | c16d97f1fef7a4fe42863a256bd0efbe49f31bc7 | 2022-04-27T12:40:25.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/minmax_sim_swap_seed1 | 1 | null | transformers | 31,477 | Entry not found |
ia/segformer-finetuned-sidewalk-10k-steps | d2c8a5a6022a425a66179f26adac38b5e238249b | 2022-04-29T00:01:00.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers"
] | null | false | ia | null | ia/segformer-finetuned-sidewalk-10k-steps | 1 | null | transformers | 31,478 | Entry not found |
PSW/minmax_sim_swap_seed27 | 6dc350ee4d895acfe5b4d9f62ff7530e522f00f2 | 2022-04-27T13:38:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/minmax_sim_swap_seed27 | 1 | null | transformers | 31,479 | Entry not found |
tau/False_large_pmi_para0_sent1_span2_True_multi_masks_with_types_enum_7_1024_0.3_epoch1 | 0f772c93fc8762c7806b96cd6a3f9981812ce15f | 2022-04-27T13:40:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_True_multi_masks_with_types_enum_7_1024_0.3_epoch1 | 1 | null | transformers | 31,480 | Entry not found |
PSW/minmax_sim_swap_seed42 | d37602e487f2261169bd67964827148cfba0d7b0 | 2022-04-27T14:41:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/minmax_sim_swap_seed42 | 1 | null | transformers | 31,481 | Entry not found |
kvnaraya/DialoGPT-small-kevin | fd3045ef247f1e9b5172335d6dae155fd791ec6b | 2022-04-27T15:04:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kvnaraya | null | kvnaraya/DialoGPT-small-kevin | 1 | null | transformers | 31,482 | Entry not found |
Das282000Prit/bert-base-uncased-finetuned-wikitext2 | adb40dc31211d1ae4dcdb601a5b10d2b62b6379a | 2022-04-27T16:11:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Das282000Prit | null | Das282000Prit/bert-base-uncased-finetuned-wikitext2 | 1 | null | transformers | 31,483 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wikitext2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9288 | 1.0 | 2319 | 1.7729 |
| 1.8208 | 2.0 | 4638 | 1.7398 |
| 1.7888 | 3.0 | 6957 | 1.7523 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
YASH312312/distilroberta-base-finetuned-wikitext2 | 7ce18d309a12cb877f5a3bdb9025213bba3ef403 | 2022-04-28T10:03:53.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | YASH312312 | null | YASH312312/distilroberta-base-finetuned-wikitext2 | 1 | null | transformers | 31,484 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1203 | 1.0 | 766 | 2.8510 |
| 2.9255 | 2.0 | 1532 | 2.8106 |
| 2.8669 | 3.0 | 2298 | 2.7515 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
alpaca/wav2vec2-large-xls-r-300m-demo-zhCN | 8d9550da5de162407648ff2c928c7af01a5fe117 | 2022-05-05T01:22:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | alpaca | null | alpaca/wav2vec2-large-xls-r-300m-demo-zhCN | 1 | null | transformers | 31,485 | Entry not found |
PSW/random_sim_ins3_seed42 | 76d52ab01bf4ceda21ad11fd1977c6ef0beca26e | 2022-04-27T17:33:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins3_seed42 | 1 | null | transformers | 31,486 | Entry not found |
lsb/wav2vec2-base-pem123-960h-la | c7f0a5e91c7a036abc968b5a4f7c937b7a4ba723 | 2022-05-03T22:06:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lsb | null | lsb/wav2vec2-base-pem123-960h-la | 1 | null | transformers | 31,487 | Entry not found |
anshr/distilgpt2_trained_policy_model_02 | 7febd7eea02c60d8468b2b53c1f0981f02517973 | 2022-04-27T18:32:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | anshr | null | anshr/distilgpt2_trained_policy_model_02 | 1 | null | transformers | 31,488 | Entry not found |
PSW/random_sim_swap2_seed27 | 11eb0dee2ffe7a729babe00263c88aceaecff575 | 2022-04-27T19:27:09.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_swap2_seed27 | 1 | null | transformers | 31,489 | Entry not found |
iamholmes/english-phrases-bible | 3c9e8d771089fbd9e07d11cd086aa6b9ce3477a2 | 2022-04-27T19:48:58.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | iamholmes | null | iamholmes/english-phrases-bible | 1 | null | sentence-transformers | 31,490 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/msmarco-distilbert-base-tas-b
This is a port of the [DistilBert TAS-B Model](https://huggingface.co/sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco) to [sentence-transformers](https://www.SBERT.net): it maps sentences & paragraphs to a 768-dimensional dense vector space and is optimized for the task of semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-tas-b')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take output from first token
def cls_pooling(model_output):
return model_output.last_hidden_state[:,0]
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = cls_pooling(model_output)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-tas-b)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [DistilBert TAS-B Model](https://huggingface.co/sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco) |
bdickson/distilbert-base-uncased-finetuned-squad | 8f5332f0db8082643a2f0e5dbfd62bd184bf927e | 2022-04-28T09:59:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | bdickson | null | bdickson/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 31,491 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2299 | 1.0 | 5533 | 1.1673 |
| 0.9564 | 2.0 | 11066 | 1.1223 |
| 0.7572 | 3.0 | 16599 | 1.1617 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
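### Usage sketch
A minimal sketch of querying the fine-tuned SQuAD checkpoint through the `question-answering` pipeline (the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="bdickson/distilbert-base-uncased-finetuned-squad")

answer = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(answer["answer"], round(answer["score"], 3))
```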
|
rdchambers/distilbert-base-uncased-finetune | 3182a7cfdfe05d135603e633f5790ebe49534d11 | 2022-04-27T20:48:51.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | rdchambers | null | rdchambers/distilbert-base-uncased-finetune | 1 | null | transformers | 31,492 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0149
- Precision: 0.8458
- Recall: 0.8060
- F1: 0.8255
- Accuracy: 0.9954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.0556 | 0.5372 | 0.1902 | 0.2809 | 0.9838 |
| No log | 2.0 | 96 | 0.0171 | 0.8320 | 0.8023 | 0.8169 | 0.9951 |
| No log | 3.0 | 144 | 0.0149 | 0.8458 | 0.8060 | 0.8255 | 0.9954 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
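### Usage sketch
The card reports token-level precision/recall/F1, so the checkpoint can be queried through the `token-classification` pipeline. The label set is not documented in the card, so the entity names returned below are unknown in advance; the example sentence is illustrative.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="rdchambers/distilbert-base-uncased-finetune",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

for entity in ner("Jane Smith visited the Mayo Clinic in Rochester last week."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```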
|
PSW/random_sim_swap2_seed42 | 02b2322005ad26e9e3e51fad276adfdcd0ff693c | 2022-04-27T20:24:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_swap2_seed42 | 1 | null | transformers | 31,493 | Entry not found |
chv5/t5-small-shuffled_take1 | a848a1745b6e756489d28ef880a39cd523fe2fef | 2022-04-28T03:36:55.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chv5 | null | chv5/t5-small-shuffled_take1 | 1 | null | transformers | 31,494 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-shuffled_take1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 11.9641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-shuffled_take1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1788
- Rouge1: 11.9641
- Rouge2: 10.5245
- Rougel: 11.5825
- Rougelsum: 11.842
- Gen Len: 18.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2238 | 1.0 | 34008 | 0.1788 | 11.9641 | 10.5245 | 11.5825 | 11.842 | 18.9838 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
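### Usage sketch
A minimal sketch using the `summarization` pipeline. The card's ROUGE scores are well below typical XSum baselines and the model name mentions a "shuffled" setup, so output quality is not guaranteed; the article text is a placeholder.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="chv5/t5-small-shuffled_take1")

article = "The local council has approved plans for a new cycle path along the river, ..."
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```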
|
simonnedved/bert-seg-v1 | 4ecc008053956f333919aec89cd98a17ab948446 | 2022-04-28T00:02:35.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | simonnedved | null | simonnedved/bert-seg-v1 | 1 | null | transformers | 31,495 | ---
license: apache-2.0
---
|
ToToKr/mbart-large-cc25-finetuned-en-to-ko2 | 1f5e06513efdfdb86698b9be58d5d5be141c5d08 | 2022-04-28T07:10:07.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ToToKr | null | ToToKr/mbart-large-cc25-finetuned-en-to-ko2 | 1 | null | transformers | 31,496 | ---
tags:
- generated_from_trainer
model-index:
- name: mbart-large-cc25-finetuned-en-to-ko2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-en-to-ko2
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
charityking2358/taglish-electra-30K | 1ed4c9212dea803bf5bada4c68acdab6d34142b6 | 2022-04-28T04:00:56.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-30K | 1 | null | transformers | 31,497 | Entry not found |
obokkkk/mt5-base | 92003817d907c24d5e1c7f776e46aaa58f788080 | 2022-04-29T02:04:16.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | obokkkk | null | obokkkk/mt5-base | 1 | null | transformers | 31,498 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2760
- Bleu: 8.6707
- Gen Len: 16.9319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 183 | 1.4997 | 6.2141 | 17.0073 |
| No log | 2.0 | 366 | 1.3718 | 7.4647 | 16.9205 |
| 1.9408 | 3.0 | 549 | 1.3184 | 8.1938 | 16.8962 |
| 1.9408 | 4.0 | 732 | 1.2857 | 8.5265 | 16.9167 |
| 1.9408 | 5.0 | 915 | 1.2760 | 8.6707 | 16.9319 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
A2/kogpt2-taf | a4a74a7da4fe470cf25f02fa866355e8b4818cb8 | 2022-05-11T21:01:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | A2 | null | A2/kogpt2-taf | 1 | 1 | transformers | 31,499 | ---
license: apache-2.0
---
Grepp KDT AI 3๊ธฐ ๊ณผ์ ํ๋ก์ ํธ.
[SKT-AI/KoGPT2](https://github.com/SKT-AI/KoGPT2) ๋ชจ๋ธ์ ๊ธฐ๋ฐ. ๋ชจ๋์ ๋ง๋ญ์น์ 2021 ๋ด์ค ๋ง๋ญ์น๋ฅผ ์ถ๊ฐ๋ก ์ธ์ด๋ชจ๋ธ๋ง ํ์ต ํ, 5๋ ์ผ๊ฐ์ง(์กฐ์ ์ผ๋ณด, ์ค์์ผ๋ณด, ๋์์ผ๋ณด, ํ๊ฒจ๋ , ๊ฒฝํฅ์ ๋ฌธ)๋ณ ๊ฐ ๋ง์ฌ๊ฐ์ ์ฌ์ค๋ก ๋ฏธ์ธ์กฐ์ ํ์์.
๋งค์ผ ๋ฐฑ์ฌ๊ฐ์ ์ฌ์ค๋ก ์ถ๊ฐ ๋ฏธ์ธ์กฐ์ ํ์ฌ ์ต์ ์ ์น์ ์ด์์ ๊ดํ ํ
์คํธ๋ ์ ์์ฑํจ.
|