modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
5p33ch3xpr/XLS-R_Finetuned | 5p33ch3xpr | 2022-12-01T20:55:03Z | 107 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-10-29T16:49:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: XLS-R_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R_Finetuned
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2280
- Wer: 0.1725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 25
- mixed_precision_training: Native AMP
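For reference, a minimal sketch of how these settings would typically map onto `transformers.TrainingArguments`; the output directory is a placeholder, and the Adam betas/epsilon listed above match the library defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xls-r-finetuned",      # hypothetical output directory
    learning_rate=0.00024,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,     # gives the effective train batch size of 2
    lr_scheduler_type="linear",
    warmup_steps=800,
    num_train_epochs=25,
    fp16=True,                         # "Native AMP" mixed precision
)
```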
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.0094 | 0.32 | 500 | 3.5637 | 1.0 |
| 3.3935 | 0.64 | 1000 | 2.6589 | 1.0 |
| 1.5455 | 0.95 | 1500 | 0.7979 | 0.8225 |
| 0.9065 | 1.27 | 2000 | 0.5392 | 0.6244 |
| 0.7891 | 1.59 | 2500 | 0.3554 | 0.4551 |
| 0.7118 | 1.91 | 3000 | 0.3682 | 0.4608 |
| 0.6061 | 2.23 | 3500 | 0.3384 | 0.4416 |
| 0.5536 | 2.54 | 4000 | 0.2987 | 0.4042 |
| 0.547 | 2.86 | 4500 | 0.2892 | 0.3892 |
| 0.4841 | 3.18 | 5000 | 0.2890 | 0.3690 |
| 0.4434 | 3.5 | 5500 | 0.2605 | 0.3636 |
| 0.4542 | 3.81 | 6000 | 0.2932 | 0.3773 |
| 0.4171 | 4.13 | 6500 | 0.2768 | 0.3550 |
| 0.3697 | 4.45 | 7000 | 0.2443 | 0.3382 |
| 0.3776 | 4.77 | 7500 | 0.2572 | 0.3366 |
| 0.3448 | 5.09 | 8000 | 0.2267 | 0.3006 |
| 0.3285 | 5.4 | 8500 | 0.2377 | 0.3023 |
| 0.3165 | 5.72 | 9000 | 0.2344 | 0.2888 |
| 0.3194 | 6.04 | 9500 | 0.2228 | 0.2699 |
| 0.2737 | 6.36 | 10000 | 0.2201 | 0.2754 |
| 0.2986 | 6.68 | 10500 | 0.2413 | 0.2850 |
| 0.2836 | 6.99 | 11000 | 0.2117 | 0.2629 |
| 0.2467 | 7.31 | 11500 | 0.2408 | 0.2877 |
| 0.2577 | 7.63 | 12000 | 0.2134 | 0.2448 |
| 0.2503 | 7.95 | 12500 | 0.2260 | 0.2600 |
| 0.2371 | 8.26 | 13000 | 0.2081 | 0.2379 |
| 0.2303 | 8.58 | 13500 | 0.2322 | 0.2668 |
| 0.213 | 8.9 | 14000 | 0.2339 | 0.2586 |
| 0.2029 | 9.22 | 14500 | 0.2300 | 0.2704 |
| 0.2146 | 9.54 | 15000 | 0.2321 | 0.2533 |
| 0.2044 | 9.85 | 15500 | 0.2393 | 0.2685 |
| 0.2008 | 10.17 | 16000 | 0.2193 | 0.2467 |
| 0.182 | 10.49 | 16500 | 0.2323 | 0.2611 |
| 0.2 | 10.81 | 17000 | 0.2188 | 0.2537 |
| 0.1855 | 11.13 | 17500 | 0.2436 | 0.2523 |
| 0.1745 | 11.44 | 18000 | 0.2351 | 0.2473 |
| 0.1705 | 11.76 | 18500 | 0.2556 | 0.2663 |
| 0.1745 | 12.08 | 19000 | 0.2189 | 0.2229 |
| 0.1641 | 12.4 | 19500 | 0.2192 | 0.2342 |
| 0.1546 | 12.71 | 20000 | 0.2432 | 0.2228 |
| 0.1661 | 13.03 | 20500 | 0.2323 | 0.2242 |
| 0.1436 | 13.35 | 21000 | 0.2554 | 0.2496 |
| 0.1443 | 13.67 | 21500 | 0.2195 | 0.2026 |
| 0.151 | 13.99 | 22000 | 0.2400 | 0.2201 |
| 0.1333 | 14.3 | 22500 | 0.2181 | 0.2235 |
| 0.137 | 14.62 | 23000 | 0.2400 | 0.2254 |
| 0.1303 | 14.94 | 23500 | 0.2265 | 0.2088 |
| 0.1386 | 15.26 | 24000 | 0.2330 | 0.2152 |
| 0.1325 | 15.58 | 24500 | 0.2328 | 0.2127 |
| 0.1227 | 15.89 | 25000 | 0.2375 | 0.2077 |
| 0.1196 | 16.21 | 25500 | 0.2394 | 0.2144 |
| 0.1197 | 16.53 | 26000 | 0.2591 | 0.2171 |
| 0.1122 | 16.85 | 26500 | 0.2383 | 0.2066 |
| 0.1093 | 17.16 | 27000 | 0.2254 | 0.2042 |
| 0.105 | 17.48 | 27500 | 0.2330 | 0.2008 |
| 0.0982 | 17.8 | 28000 | 0.2317 | 0.1902 |
| 0.1072 | 18.12 | 28500 | 0.2332 | 0.1971 |
| 0.1033 | 18.44 | 29000 | 0.2313 | 0.1923 |
| 0.0982 | 18.75 | 29500 | 0.2344 | 0.1934 |
| 0.103 | 19.07 | 30000 | 0.2295 | 0.1902 |
| 0.0945 | 19.39 | 30500 | 0.2352 | 0.1976 |
| 0.0892 | 19.71 | 31000 | 0.2414 | 0.1920 |
| 0.1003 | 20.03 | 31500 | 0.2300 | 0.1879 |
| 0.0861 | 20.34 | 32000 | 0.2215 | 0.1778 |
| 0.0845 | 20.66 | 32500 | 0.2321 | 0.1866 |
| 0.0858 | 20.98 | 33000 | 0.2311 | 0.1850 |
| 0.0785 | 21.3 | 33500 | 0.2341 | 0.1874 |
| 0.0786 | 21.61 | 34000 | 0.2322 | 0.1916 |
| 0.0793 | 21.93 | 34500 | 0.2358 | 0.1846 |
| 0.0772 | 22.25 | 35000 | 0.2234 | 0.1770 |
| 0.0786 | 22.57 | 35500 | 0.2180 | 0.1758 |
| 0.0747 | 22.89 | 36000 | 0.2269 | 0.1830 |
| 0.0734 | 23.2 | 36500 | 0.2320 | 0.1860 |
| 0.067 | 23.52 | 37000 | 0.2324 | 0.1797 |
| 0.0733 | 23.84 | 37500 | 0.2324 | 0.1772 |
| 0.0701 | 24.16 | 38000 | 0.2293 | 0.1737 |
| 0.0691 | 24.48 | 38500 | 0.2303 | 0.1750 |
| 0.0613 | 24.79 | 39000 | 0.2280 | 0.1725 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
unstructuredio/donut-base-sroie | unstructuredio | 2022-12-01T20:45:49Z | 131 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2022-12-01T15:48:28Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie-long
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-long
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.7.0
- Tokenizers 0.11.0
|
fathyshalab/all-roberta-large-v1-auto_and_commute-4-16-5 | fathyshalab | 2022-12-01T20:40:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:55:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-4-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-4-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
facebook/esm2_t36_3B_UR50D | facebook | 2022-12-01T20:22:22Z | 3,892,233 | 18 | transformers | [
"transformers",
"pytorch",
"tf",
"esm",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-10-13T12:38:30Z | ---
license: mit
widget:
- text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
---
## ESM-2
ESM-2 is a state-of-the-art protein model trained on a masked language modelling objective. It is suitable for fine-tuning on a wide range of tasks that take protein sequences as input. For detailed information on the model architecture and training data, please refer to the [accompanying paper](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v2). You may also be interested in some demo notebooks ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb), [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb)) which demonstrate how to fine-tune ESM-2 models on your tasks of interest.
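A minimal fill-mask sketch with the `transformers` pipeline, using the example sequence from the widget metadata above; a smaller ESM-2 checkpoint can be substituted if the 3B model does not fit in memory:
```python
from transformers import pipeline

# ESM-2 is a masked protein language model; <mask> marks the residue to predict.
unmasker = pipeline("fill-mask", model="facebook/esm2_t36_3B_UR50D")
sequence = "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
for prediction in unmasker(sequence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```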
Several ESM-2 checkpoints are available in the Hub with varying sizes. Larger sizes generally have somewhat better accuracy, but require much more memory and time to train:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [esm2_t48_15B_UR50D](https://huggingface.co/facebook/esm2_t48_15B_UR50D) | 48 | 15B |
| [esm2_t36_3B_UR50D](https://huggingface.co/facebook/esm2_t36_3B_UR50D) | 36 | 3B |
| [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) | 33 | 650M |
| [esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) | 30 | 150M |
| [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) | 12 | 35M |
| [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) | 6 | 8M | |
RomeroRZ/style-eternos | RomeroRZ | 2022-12-01T20:16:55Z | 0 | 11 | null | [
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2022-11-30T18:55:09Z | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---


#### Eternos - A surrealist / Minimalist model
(2.0 work in progress yay!)
The base instance images were generated from multiple surrealist artworks, with some Dali touches and Roman / Greek architectural influences.
The 704 version is more abstract and transfers its style less readily onto regular styles because of its higher training resolution.
Tip: use an init image (even a stretched one) for non-standard resolutions; it can help guide SD a lot :)
Instance prompt: **romerorzeternos** (optional)
You can find cool prompts with their associated outputs on my website: **[romerorz.art](https://www.romerorz.art/)**
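A minimal img2img sketch with `diffusers`, assuming the checkpoint is published in diffusers format (if only a `.ckpt` is provided, convert it first); the init image path and prompt are placeholders:
```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumes diffusers-format weights are available in this repo.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "RomeroRZ/style-eternos", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("layout_sketch.png").convert("RGB")  # hypothetical init image
prompt = "romerorzeternos, minimalist surreal temple, roman architecture"  # placeholder prompt
image = pipe(prompt=prompt, image=init_image, strength=0.7, guidance_scale=7.5).images[0]
image.save("eternos.png")
```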
|
fathyshalab/all-roberta-large-v1-auto_and_commute-2-16-5 | fathyshalab | 2022-12-01T19:52:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:51:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-2-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ameerTelbani/ameeeer | ameerTelbani | 2022-12-01T19:49:18Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-01T19:49:03Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ameeeer
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8656716346740723
---
# ameeeer
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
futuredatascience/from-classifier-v2 | futuredatascience | 2022-12-01T19:42:12Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-01T19:42:02Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1060,
"warmup_steps": 106,
"weight_decay": 0.01
}
```
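A minimal sketch of reproducing this setup with `sentence-transformers`; the training pairs and base checkpoint are hypothetical, and only the fit() parameters mirror the values listed above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Hypothetical labeled pairs: (text_a, text_b, similarity score in [0, 1]).
train_examples = [
    InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.8),
]

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed base model
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters shown above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    warmup_steps=106,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```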
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
nvidia/nemo-megatron-mt5-3B | nvidia | 2022-12-01T19:34:02Z | 30 | 12 | nemo | [
"nemo",
"pytorch",
"seq2seq",
"masked language modeling",
"multilingual",
"ja",
"en",
"it",
"lv",
"ru",
"hu",
"zh",
"pl",
"el",
"de",
"cs",
"ko",
"hi",
"no",
"da",
"sk",
"fr",
"pt",
"lt",
"es",
"nl",
"sv",
"ro",
"fi",
"dataset:mc4",
"arxiv:2010.11934",
"arxiv:1910.10683",
"arxiv:1809.05053",
"arxiv:1909.08053",
"license:cc-by-4.0",
"region:us"
] | null | 2022-09-22T19:46:28Z | ---
language:
- ja
- en
- it
- lv
- ru
- hu
- zh
- pl
- el
- de
- cs
- ko
- hi
- no
- da
- sk
- fr
- pt
- lt
- es
- nl
- sv
- ro
- fi
library_name: nemo
datasets:
- mc4
tags:
- pytorch
- seq2seq
- masked language modeling
- multilingual
license: cc-by-4.0
---
# NeMo Megatron-mT5 3B
<style>
img {
display: inline;
}
</style>
|[](#model-architecture)|[](#model-architecture)|[](#datasets)
## Model Description
NeMo Megatron-mT5 3B is a *multilingual* transformer-based masked language model. [mT5](https://arxiv.org/abs/2010.11934) [1] is a class of encoder-decoder models trained with a span-based masked language modeling objective on a dataset comprising documents from many different languages. We follow the [T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1) approach of pre-training using only the masked language modeling objective. It has Tensor Parallelism (TP) of 2, Pipeline Parallelism (PP) of 1 and should fit on a single NVIDIA GPU for inference and 2 A100 80G GPUs for finetuning.
This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
**NOTE**: Weights are distributed in bfloat16.
## List of Languages
We pre-trained our mT5 model on the following languages from the [mC4](https://github.com/allenai/allennlp/discussions/5265) dataset.
1. Japanese
2. English
3. Italian
4. Latvian
5. Russian
6. Hungarian
7. Chinese
8. Polish
9. Greek
10. German
11. Czech
12. Korean
13. Hindi
14. Norwegian
15. Danish
16. Slovak
17. French
18. Portuguese
19. Lithuanian
20. Spanish
21. Dutch
22. Swedish
23. Romanian
24. Finnish
*NOTE*: The English data used to train our model is the smaller "clean" version (C4) used in the [T5 paper](https://arxiv.org/abs/1910.10683) and not the larger one distributed as part of mC4.
## Getting started
### Step 1: Install NeMo and dependencies
You will need to install NVIDIA Apex and NeMo.
```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```
```
pip install nemo_toolkit['nlp']==1.12.0
```
Alternatively, you can use the NeMo Megatron training Docker container with all dependencies pre-installed - [https://developer.nvidia.com/nemo-megatron-open-beta?nvid=nv-int-tblg-249896](https://developer.nvidia.com/nemo-megatron-open-beta)
### Step 2: Run inference
**Note.** The model has been trained with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1, but it should be possible to run inference with tensor parallel size 1 on most NVIDIA GPUs
```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout r1.12.0
python megatron_t5_eval.py \
--model_file nemo_megatron_mt5_3b_bf16_tp2.nemo \
--prompt "La capitale de la France est <mask>" \
--tensor_model_parallel_size 2
```
The script will automatically replace all \<mask\> tokens with the appropriate sentinel tokens used while pre-training and attempt to fill them in autoregressively with greedy decoding.
*Expected Response*:
```
{
'prompt': 'La capitale de la France est <mask>',
'completion': {
'text': 'Paris',
'tokens': [(4586, '▁Paris', 0.0)]},
'masked_input': '▁La ▁capital e ▁de ▁la ▁France ▁est ▁<extra_id_0>'
}
```
- prompt: The provided raw prompt as input
- completion:
- text: The final generated text from the model along with special/sentinel tokens besides \</s\>
- tokens: Each individual subword that is generated along with its log-probability.
- masked_input: The original raw prompt with <mask> replaced with appropriate sentinel tokens.
## Training Data
The model was trained on the [mC4](https://github.com/allenai/allennlp/discussions/5265) dataset made available by AI2 and hosted on Huggingface.
## Evaluation results
Zero-shot language transfer performance on the [XNLI](https://arxiv.org/abs/1809.05053) dataset for a model fine-tuned on MNLI.
| English | Spanish | German | French | Chinese|
|---|---| ---|---|---|
|89.4|86.4|84.5|85.8|79.9|
## Limitations
The model was trained on the data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts.
## References
[1] [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
alanila/autotrain-training-2307973005 | alanila | 2022-12-01T19:32:39Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:alanila/autotrain-data-training",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T19:29:41Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- alanila/autotrain-data-training
co2_eq_emissions:
emissions: 3.7679548759427006
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2307973005
- CO2 Emissions (in grams): 3.7680
## Validation Metrics
- Loss: 1.098
- Accuracy: 0.508
- Macro F1: 0.559
- Micro F1: 0.508
- Weighted F1: 0.452
- Macro Precision: 0.610
- Micro Precision: 0.508
- Weighted Precision: 0.537
- Macro Recall: 0.581
- Micro Recall: 0.508
- Weighted Recall: 0.508
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/alanila/autotrain-training-2307973005
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alanila/autotrain-training-2307973005", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alanila/autotrain-training-2307973005", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
juancopi81/sd-class-cryptopunks-64 | juancopi81 | 2022-12-01T19:24:22Z | 37 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T19:21:45Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cryptopunks.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('juancopi81/sd-class-cryptopunks-64')
image = pipeline().images[0]
image
```
|
drewski/distilbert-base-uncased-finetuned-cola | drewski | 2022-12-01T19:09:26Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T18:58:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5258252097729852
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5561
- Matthews Correlation: 0.5258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5245 | 1.0 | 535 | 0.5269 | 0.4122 |
| 0.3513 | 2.0 | 1070 | 0.4976 | 0.4999 |
| 0.2411 | 3.0 | 1605 | 0.5561 | 0.5258 |
| 0.1907 | 4.0 | 2140 | 0.7641 | 0.5174 |
| 0.1409 | 5.0 | 2675 | 0.8216 | 0.5189 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
shahukareem/sd-class-butterflies-64 | shahukareem | 2022-12-01T19:08:22Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T19:07:58Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('shahukareem/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
fathyshalab/all-roberta-large-v1-home-9-16-5 | fathyshalab | 2022-12-01T19:02:24Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:47:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
shrinivasbjoshi/r2-w266-setfit-mbti-multiclass-hypsearch-mpnet-nov30 | shrinivasbjoshi | 2022-12-01T18:17:06Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-01T18:16:51Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2560 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 4.2848872506915845e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2560,
"warmup_steps": 256,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
fathyshalab/all-roberta-large-v1-home-6-16-5 | fathyshalab | 2022-12-01T17:37:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:41:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
VlakoResker/sd-class-butterflies-32 | VlakoResker | 2022-12-01T17:28:55Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T17:28:38Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('VlakoResker/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
fathyshalab/all-roberta-large-v1-home-5-16-5 | fathyshalab | 2022-12-01T17:10:56Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:39:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bowwwave/sd-class-butterflies-32 | bowwwave | 2022-12-01T16:52:39Z | 32 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T16:52:24Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('bowwwave/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
varunsappa/finetuning-sentiment-model-3000-samples | varunsappa | 2022-12-01T16:23:25Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T16:09:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8833333333333333
- name: F1
type: f1
value: 0.8844884488448845
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3132
- Accuracy: 0.8833
- F1: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fanpu/model_output_original_subreddit-AskScienceFiction_1 | fanpu | 2022-12-01T15:13:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-01T06:12:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: model_output_original_subreddit-AskScienceFiction_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_original_subreddit-AskScienceFiction_1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9231 | 0.3 | 500 | 3.8087 |
| 3.8459 | 0.6 | 1000 | 3.7766 |
| 3.8217 | 0.9 | 1500 | 3.7372 |
| 3.6939 | 1.21 | 2000 | 3.7237 |
| 3.6745 | 1.51 | 2500 | 3.7030 |
| 3.6757 | 1.81 | 3000 | 3.6811 |
| 3.5099 | 2.11 | 3500 | 3.6839 |
| 3.505 | 2.41 | 4000 | 3.6709 |
| 3.5232 | 2.71 | 4500 | 3.6515 |
| 3.3416 | 3.01 | 5000 | 3.6563 |
| 3.3725 | 3.32 | 5500 | 3.6496 |
| 3.3672 | 3.62 | 6000 | 3.6373 |
| 3.3495 | 3.92 | 6500 | 3.6280 |
| 3.2464 | 4.22 | 7000 | 3.6439 |
| 3.2467 | 4.52 | 7500 | 3.6415 |
| 3.2473 | 4.82 | 8000 | 3.6407 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
YeaHi/diffusion | YeaHi | 2022-12-01T15:11:02Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-12-01T15:11:02Z | ---
license: bigscience-openrail-m
---
|
arrafmousa/xlnet-base-cased-finetuned-squad | arrafmousa | 2022-12-01T15:02:55Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-01T13:27:48Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-squad
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1093
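A minimal inference sketch with the `transformers` question-answering pipeline; the question and context are placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="arrafmousa/xlnet-base-cased-finetuned-squad")
result = qa(
    question="What is the model fine-tuned from?",                      # placeholder question
    context="This model is a fine-tuned version of xlnet-base-cased.",  # placeholder context
)
print(result["answer"], result["score"])
```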
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 203 | 0.2186 |
| No log | 2.0 | 406 | 0.1985 |
| 0.4204 | 3.0 | 609 | 0.1093 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
mousaazari/t5-text2sql_v1 | mousaazari | 2022-12-01T13:46:33Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-08-15T12:11:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-text2sql_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-text2sql_v1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0772
- Rouge2 Precision: 0.8835
- Rouge2 Recall: 0.39
- Rouge2 Fmeasure: 0.5088
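A minimal text-to-SQL inference sketch with `transformers`; the input question is hypothetical, and the exact prompt format used during fine-tuning is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mousaazari/t5-text2sql_v1")
model = AutoModelForSeq2SeqLM.from_pretrained("mousaazari/t5-text2sql_v1")

question = "How many customers placed an order in 2021?"  # hypothetical input
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```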
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| No log | 1.0 | 11 | 1.9420 | 0.0755 | 0.022 | 0.0323 |
| No log | 2.0 | 22 | 1.2731 | 0.0912 | 0.0263 | 0.039 |
| No log | 3.0 | 33 | 0.8717 | 0.0993 | 0.0284 | 0.0424 |
| No log | 4.0 | 44 | 0.5705 | 0.1014 | 0.032 | 0.0464 |
| No log | 5.0 | 55 | 0.3929 | 0.4151 | 0.1528 | 0.2149 |
| No log | 6.0 | 66 | 0.2911 | 0.7778 | 0.351 | 0.4594 |
| No log | 7.0 | 77 | 0.2290 | 0.781 | 0.3305 | 0.4395 |
| No log | 8.0 | 88 | 0.1995 | 0.7381 | 0.2992 | 0.4018 |
| No log | 9.0 | 99 | 0.1768 | 0.752 | 0.3147 | 0.4202 |
| No log | 10.0 | 110 | 0.1554 | 0.7242 | 0.3136 | 0.412 |
| No log | 11.0 | 121 | 0.1446 | 0.8128 | 0.3583 | 0.4694 |
| No log | 12.0 | 132 | 0.1337 | 0.8194 | 0.3653 | 0.478 |
| No log | 13.0 | 143 | 0.1264 | 0.8088 | 0.3564 | 0.4675 |
| No log | 14.0 | 154 | 0.1170 | 0.8036 | 0.3502 | 0.462 |
| No log | 15.0 | 165 | 0.1078 | 0.8851 | 0.3981 | 0.5188 |
| No log | 16.0 | 176 | 0.1046 | 0.8716 | 0.3864 | 0.5054 |
| No log | 17.0 | 187 | 0.1007 | 0.8753 | 0.3851 | 0.5042 |
| No log | 18.0 | 198 | 0.0951 | 0.8756 | 0.3941 | 0.5126 |
| No log | 19.0 | 209 | 0.0928 | 0.8414 | 0.3565 | 0.4708 |
| No log | 20.0 | 220 | 0.0894 | 0.854 | 0.3642 | 0.4808 |
| No log | 21.0 | 231 | 0.0863 | 0.8954 | 0.3954 | 0.5168 |
| No log | 22.0 | 242 | 0.0832 | 0.888 | 0.3931 | 0.5122 |
| No log | 23.0 | 253 | 0.0828 | 0.8835 | 0.39 | 0.5088 |
| No log | 24.0 | 264 | 0.0820 | 0.8835 | 0.39 | 0.5088 |
| No log | 25.0 | 275 | 0.0803 | 0.8835 | 0.39 | 0.5088 |
| No log | 26.0 | 286 | 0.0792 | 0.8835 | 0.39 | 0.5088 |
| No log | 27.0 | 297 | 0.0784 | 0.8761 | 0.3886 | 0.5066 |
| No log | 28.0 | 308 | 0.0775 | 0.8835 | 0.39 | 0.5088 |
| No log | 29.0 | 319 | 0.0772 | 0.8835 | 0.39 | 0.5088 |
| No log | 30.0 | 330 | 0.0772 | 0.8835 | 0.39 | 0.5088 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
MGanesh29/distilbert-base-uncased-finetuned-cola-v5 | MGanesh29 | 2022-12-01T13:40:01Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T10:54:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-cola-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-v5
This model is a fine-tuned version of [MGanesh29/distilbert-base-uncased-finetuned-cola-v5](https://huggingface.co/MGanesh29/distilbert-base-uncased-finetuned-cola-v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2563
- Accuracy: 0.9310
- Precision: 0.9310
- Recall: 0.9310
- F1: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 6.25 | 50 | 0.2638 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 12.5 | 100 | 0.2607 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 18.75 | 150 | 0.2643 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 25.0 | 200 | 0.2563 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-home-2-16-5 | fathyshalab | 2022-12-01T13:02:46Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:33:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-2-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
manirai91/enlm-roberta-conll2003-final | manirai91 | 2022-12-01T12:28:17Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-01T11:02:56Z | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: enlm-roberta-conll2003-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-conll2003-final
This model is a fine-tuned version of [manirai91/enlm-roberta-final](https://huggingface.co/manirai91/enlm-roberta-final) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-9-16-5 | fathyshalab | 2022-12-01T12:11:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:30:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ViktorDo/DistilBERT-POWO_MGH_Life_Form_Finetuned | ViktorDo | 2022-12-01T11:55:19Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T11:45:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-POWO_MGH_Life_Form_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_MGH_Life_Form_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5891 | 1.0 | 914 | 0.4130 |
| 0.4207 | 2.0 | 1828 | 0.3868 |
| 0.3722 | 3.0 | 2742 | 0.3845 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hizak/sd-class-butterflies-64 | hizak | 2022-12-01T11:52:54Z | 33 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T11:52:01Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("hizak/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-8-16-5 | fathyshalab | 2022-12-01T11:45:56Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:28:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-7-16-5 | fathyshalab | 2022-12-01T11:20:14Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:26:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rls-telefonica/word_sense_mchoice_w_d_c | rls-telefonica | 2022-12-01T11:13:31Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-12-01T10:46:55Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: word_sense_mchoice_w_d_c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# word_sense_mchoice_w_d_c
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
- Accuracy: 0.8210
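The card does not document how inputs should be constructed, so the following is only a minimal sketch of querying a multiple-choice head with `AutoModelForMultipleChoice`; the example sentence, the candidate sense definitions, and the (context, candidate) pairing are assumptions and may not match the format used during fine-tuning.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "rls-telefonica/word_sense_mchoice_w_d_c"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# Sentence with an ambiguous word, paired with candidate sense definitions (invented examples).
context = "Se sentó en el banco de la plaza a leer el periódico."
candidates = [
    "Asiento largo para varias personas.",      # bench
    "Entidad financiera que custodia dinero.",  # financial institution
]

# Encode one (context, candidate) pair per choice, then add a batch dimension.
encoded = tokenizer([context] * len(candidates), candidates,
                    padding=True, truncation=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}  # (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)

print("Predicted sense:", candidates[logits.argmax(-1).item()])
```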
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6904 | 1.0 | 531 | 0.5099 | 0.7913 |
| 0.2393 | 2.0 | 1062 | 0.6351 | 0.8202 |
| 0.0842 | 3.0 | 1593 | 0.8885 | 0.8210 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Earrr/Disco | Earrr | 2022-12-01T11:10:20Z | 0 | 0 | null | [
"region:us"
] | null | 2022-12-01T11:03:56Z | I don't own this model
I uploaded it for personal use
please contact me to delete if you are the auther
[email protected] |
hizak/sd-class-butterflies-32 | hizak | 2022-12-01T11:09:16Z | 37 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-01T10:12:45Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("hizak/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-0-val | AlekseyKorshuk | 2022-12-01T10:15:15Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:ChaiML/dalio_combined_v1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-30T10:39:53Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- ChaiML/dalio_combined_v1
model-index:
- name: 6.7b-ri-reproduce-combined-4-gpu-0-val
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-ri-reproduce-combined-4-gpu-0-val
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the ChaiML/dalio_combined_v1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ConvLab/lava-policy-multiwoz20 | ConvLab | 2022-12-01T09:59:09Z | 0 | 0 | null | [
"dialogue policy",
"task-oriented dialog",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-11-29T15:40:46Z | ---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
---
# lava-policy-multiwoz
This is the best-performing LAVA_kl model from the [LAVA paper](https://aclanthology.org/2020.coling-main.41/), which can be used as a word-level policy module in the ConvLab-3 pipeline.
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
The model was trained on MultiWOZ 2.0 data using the [LAVA codebase](https://gitlab.cs.uni-duesseldorf.de/general/dsml/lava-public). The model started with VAE pre-training and fine-tuning with informative prior KL loss, followed by corpus-based RL with REINFORCE.
### Training hyperparameters
The following hyperparameters were used during SL training:
- y_size: 10
- k_size: 20
- beta: 0.1
- simple_posterior: true
- contextual_posterior: false
- learning_rate: 1e-03
- max_vocab_size: 1000
- max_utt_len: 50
- max_dec_len: 30
- backward_size: 2
- train_batch_size: 128
- seed: 58
- optimizer: Adam
- num_epoch: 100 with early stopping based on validation set
The following hyperparameters were used during RL training:
- tune_pi_only: false
- max_words: 100
- temperature: 1.0
- episode_repeat: 1.0
- rl_lr: 0.01
- momentum: 0.0
- nesterov: false
- gamma: 0.99
- rl_clip: 5.0
- random_seed: 38
|
AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-20-val | AlekseyKorshuk | 2022-12-01T09:45:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-30T10:26:05Z | ---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6.7b-ri-reproduce-combined-4-gpu-20-val
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-ri-reproduce-combined-4-gpu-20-val
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9434
- Accuracy: 0.0329
- Perplexity: 51.5916
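For reference, a rough sketch of loading this checkpoint for generation is shown below; it assumes a GPU with enough memory for a 6.7B-parameter model, uses fp16 plus `device_map="auto"` (which requires `accelerate`), and the prompt is invented rather than taken from the training data.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-20-val"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # assumption: half precision to fit the 6.7B weights
    device_map="auto",           # requires the `accelerate` package
)

prompt = "What principles should guide a long-term investor?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```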
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| 2.5731 | 1.0 | 79 | 2.6113 | 0.0317 | 13.6171 |
| 2.206 | 2.0 | 158 | 2.4805 | 0.0328 | 11.9469 |
| 1.9105 | 3.0 | 237 | 2.4512 | 0.0333 | 11.6019 |
| 1.6301 | 4.0 | 316 | 2.5078 | 0.0345 | 12.2780 |
| 1.3733 | 5.0 | 395 | 2.6816 | 0.0342 | 14.6090 |
| 1.1337 | 6.0 | 474 | 3.0078 | 0.0330 | 20.2431 |
| 0.9619 | 7.0 | 553 | 3.1777 | 0.0330 | 23.9923 |
| 0.798 | 8.0 | 632 | 3.2559 | 0.0330 | 25.9419 |
| 0.6653 | 9.0 | 711 | 3.4277 | 0.0331 | 30.8068 |
| 0.552 | 10.0 | 790 | 3.5566 | 0.0333 | 35.0453 |
| 0.4568 | 11.0 | 869 | 3.7324 | 0.0324 | 41.7802 |
| 0.3756 | 12.0 | 948 | 3.8184 | 0.0328 | 45.5295 |
| 0.3119 | 13.0 | 1027 | 3.8477 | 0.0331 | 46.8831 |
| 0.2448 | 14.0 | 1106 | 3.9062 | 0.0329 | 49.7122 |
| 0.1986 | 15.0 | 1185 | 3.9434 | 0.0329 | 51.5916 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_3 | gary109 | 2022-12-01T09:36:58Z | 85 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"dataset:ai_light_dance",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-30T11:17:30Z | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
datasets:
- ai_light_dance
model-index:
- name: ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_3
This model is a fine-tuned version of [gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_2](https://huggingface.co/gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_2) on the GARY109/AI_LIGHT_DANCE - ONSET-DRUMS_FOLD_3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4093
- Wer: 0.1250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4557 | 1.0 | 70 | 0.5794 | 0.1197 |
| 0.6796 | 2.0 | 140 | 0.5726 | 0.1388 |
| 0.4511 | 3.0 | 210 | 0.6290 | 0.1242 |
| 0.609 | 4.0 | 280 | 0.7112 | 0.1187 |
| 0.4082 | 5.0 | 350 | 0.8275 | 0.1965 |
| 0.4638 | 6.0 | 420 | 0.4767 | 0.1524 |
| 0.4446 | 7.0 | 490 | 0.5091 | 0.1376 |
| 0.4337 | 8.0 | 560 | 0.6622 | 0.1170 |
| 0.4604 | 9.0 | 630 | 0.7242 | 0.1600 |
| 0.4462 | 10.0 | 700 | 0.7298 | 0.1383 |
| 0.4201 | 11.0 | 770 | 0.8058 | 0.1362 |
| 0.4204 | 12.0 | 840 | 0.6255 | 0.1099 |
| 0.461 | 13.0 | 910 | 0.5204 | 0.1109 |
| 0.3779 | 14.0 | 980 | 0.6911 | 0.1125 |
| 0.3403 | 15.0 | 1050 | 0.5863 | 0.1188 |
| 0.6223 | 16.0 | 1120 | 0.6367 | 0.1147 |
| 0.3827 | 17.0 | 1190 | 0.6266 | 0.1293 |
| 0.3055 | 18.0 | 1260 | 0.4866 | 0.1095 |
| 0.3917 | 19.0 | 1330 | 0.4093 | 0.1250 |
| 0.3912 | 20.0 | 1400 | 0.4514 | 0.1077 |
| 0.3861 | 21.0 | 1470 | 0.5043 | 0.1156 |
| 0.3659 | 22.0 | 1540 | 0.5680 | 0.1091 |
| 0.3536 | 23.0 | 1610 | 0.7940 | 0.1029 |
| 0.3559 | 24.0 | 1680 | 0.5877 | 0.1101 |
| 0.3274 | 25.0 | 1750 | 0.4461 | 0.1059 |
| 0.5232 | 26.0 | 1820 | 1.2051 | 0.1068 |
| 0.3241 | 27.0 | 1890 | 0.8716 | 0.1099 |
| 0.3169 | 28.0 | 1960 | 0.6752 | 0.1082 |
| 0.2938 | 29.0 | 2030 | 0.6023 | 0.1071 |
| 0.3022 | 30.0 | 2100 | 0.6122 | 0.1146 |
| 0.4245 | 31.0 | 2170 | 0.5735 | 0.1102 |
| 0.3095 | 32.0 | 2240 | 0.4476 | 0.1042 |
| 0.4062 | 33.0 | 2310 | 0.6339 | 0.1130 |
| 0.3202 | 34.0 | 2380 | 0.4101 | 0.1077 |
| 0.2952 | 35.0 | 2450 | 0.4825 | 0.1076 |
| 0.2945 | 36.0 | 2520 | 0.4998 | 0.1058 |
| 0.336 | 37.0 | 2590 | 0.5490 | 0.1061 |
| 0.2912 | 38.0 | 2660 | 0.4804 | 0.1038 |
| 0.282 | 39.0 | 2730 | 0.4776 | 0.1022 |
| 0.4359 | 40.0 | 2800 | 0.4376 | 0.1044 |
| 0.2698 | 41.0 | 2870 | 0.5609 | 0.1098 |
| 0.3004 | 42.0 | 2940 | 0.5258 | 0.1083 |
| 0.2873 | 43.0 | 3010 | 0.4810 | 0.1069 |
| 0.3413 | 44.0 | 3080 | 0.4961 | 0.1080 |
| 0.2802 | 45.0 | 3150 | 0.6850 | 0.1076 |
| 0.2584 | 46.0 | 3220 | 0.7210 | 0.1082 |
| 0.3282 | 47.0 | 3290 | 0.6179 | 0.1053 |
| 0.2666 | 48.0 | 3360 | 0.7673 | 0.1075 |
| 0.2989 | 49.0 | 3430 | 0.7710 | 0.1079 |
| 0.2676 | 50.0 | 3500 | 0.7655 | 0.1076 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
MGanesh29/distilbert-base-uncased-finetuned-cola-v3 | MGanesh29 | 2022-12-01T09:17:29Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T09:00:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9655
- Matthews Correlation: 0.7369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 8 | 1.9112 | 0.1486 |
| No log | 2.0 | 16 | 1.8626 | 0.1273 |
| No log | 3.0 | 24 | 1.7793 | 0.1947 |
| No log | 4.0 | 32 | 1.6722 | 0.1681 |
| No log | 5.0 | 40 | 1.5578 | 0.3876 |
| No log | 6.0 | 48 | 1.4463 | 0.5551 |
| No log | 7.0 | 56 | 1.3280 | 0.5498 |
| No log | 8.0 | 64 | 1.2302 | 0.5936 |
| No log | 9.0 | 72 | 1.1408 | 0.6998 |
| No log | 10.0 | 80 | 1.0765 | 0.6601 |
| No log | 11.0 | 88 | 1.0145 | 0.6988 |
| No log | 12.0 | 96 | 0.9655 | 0.7369 |
| No log | 13.0 | 104 | 0.9389 | 0.6992 |
| No log | 14.0 | 112 | 0.9258 | 0.6992 |
| No log | 15.0 | 120 | 0.9209 | 0.6992 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289 | hr-elrond | 2022-12-01T08:59:48Z | 100 | 2 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:hr-elrond/autotrain-data-consumer-nature-speech_finbert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-18T15:00:49Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- hr-elrond/autotrain-data-consumer-nature-speech_finbert
co2_eq_emissions:
emissions: 0.004371975254312265
---
# Model Trained Using AutoTrain
We trained FinBERT to distinguish statements in firms' talk that contain consumer concepts of human nature (e.g., "I believe consumers generally act rational.", "Consumers must take over responsibility for the choices they make.", "It seems consumers behave quite altruistic.") from statements that do not (e.g., "We expect buyers to double their purchases next year.", "We see a 5% growth in numbers compared to the previous year.").
The training data consisted of 236 positive documents (containing concepts of consumer nature) and 1,034 negative documents (not containing concepts of consumer nature) extracted from earnings call transcripts of S&P 500 companies (2015-2020).
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2147169289
- CO2 Emissions (in grams): 0.0044
## Validation Metrics
- Loss: 0.256
- Accuracy: 0.913
- Precision: 0.736
- Recall: 0.830
- AUC: 0.956
- F1: 0.780
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hr-elrond/autotrain-consumer-nature-speech_finbert-2147169289", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
htermotto/distilbert-base-uncased-finetuned-sngp-squad-seed-999 | htermotto | 2022-12-01T08:30:08Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-12-01T05:08:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-sngp-squad-seed-999
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sngp-squad-seed-999
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4527 | 1.0 | 8248 | 2.0711 |
| 2.1703 | 2.0 | 16496 | 1.9622 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ravinduj/finetuning-sentiment-model-3000-samples | ravinduj | 2022-12-01T08:21:50Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T10:38:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8533333333333334
- name: F1
type: f1
value: 0.8543046357615894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3489
- Accuracy: 0.8533
- F1: 0.8543
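A minimal usage sketch with the text-classification pipeline is shown below; the review text is invented, and the label names (`LABEL_0`/`LABEL_1` vs. `NEGATIVE`/`POSITIVE`) depend on whether `id2label` was set during fine-tuning, so check `model.config.id2label`.
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="ravinduj/finetuning-sentiment-model-3000-samples",
)

# Invented example review; map the returned label via config.id2label.
print(classifier("This movie was a complete waste of two hours."))
```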
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-3-16-5 | fathyshalab | 2022-12-01T08:14:12Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:19:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pig4431/YELP_ALBERT_5E | pig4431 | 2022-12-01T08:07:20Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T07:33:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: YELP_ALBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9733333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YELP_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1394
- Accuracy: 0.9733
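A sketch of scoring a review with the raw model follows; the review is made up, and since `yelp_review_full` is a five-class (1-5 star) task, the meaning of each class index should be confirmed via `model.config.id2label`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "pig4431/YELP_ALBERT_5E"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The food was great but the service was painfully slow.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```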
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4967 | 0.03 | 50 | 0.1667 | 0.9467 |
| 0.3268 | 0.06 | 100 | 0.2106 | 0.9133 |
| 0.3413 | 0.1 | 150 | 0.2107 | 0.9667 |
| 0.3172 | 0.13 | 200 | 0.1906 | 0.94 |
| 0.2804 | 0.16 | 250 | 0.2588 | 0.9 |
| 0.2604 | 0.19 | 300 | 0.2023 | 0.94 |
| 0.2532 | 0.22 | 350 | 0.1263 | 0.9533 |
| 0.2103 | 0.26 | 400 | 0.1233 | 0.96 |
| 0.212 | 0.29 | 450 | 0.2019 | 0.9267 |
| 0.2669 | 0.32 | 500 | 0.1110 | 0.9667 |
| 0.2187 | 0.35 | 550 | 0.1542 | 0.96 |
| 0.2203 | 0.38 | 600 | 0.0879 | 0.9733 |
| 0.2699 | 0.42 | 650 | 0.0971 | 0.9667 |
| 0.2107 | 0.45 | 700 | 0.0863 | 0.9667 |
| 0.2443 | 0.48 | 750 | 0.0823 | 0.9733 |
| 0.1987 | 0.51 | 800 | 0.1207 | 0.9733 |
| 0.2326 | 0.54 | 850 | 0.1368 | 0.9667 |
| 0.1787 | 0.58 | 900 | 0.1027 | 0.9667 |
| 0.2159 | 0.61 | 950 | 0.2443 | 0.9333 |
| 0.1316 | 0.64 | 1000 | 0.2035 | 0.9467 |
| 0.2416 | 0.67 | 1050 | 0.0882 | 0.9733 |
| 0.2008 | 0.7 | 1100 | 0.1709 | 0.9533 |
| 0.2065 | 0.74 | 1150 | 0.1098 | 0.9667 |
| 0.2391 | 0.77 | 1200 | 0.1055 | 0.9667 |
| 0.1533 | 0.8 | 1250 | 0.1997 | 0.94 |
| 0.2016 | 0.83 | 1300 | 0.0899 | 0.96 |
| 0.2016 | 0.86 | 1350 | 0.0957 | 0.9733 |
| 0.2316 | 0.9 | 1400 | 0.0784 | 0.98 |
| 0.1839 | 0.93 | 1450 | 0.0784 | 0.9733 |
| 0.2121 | 0.96 | 1500 | 0.1150 | 0.9733 |
| 0.1307 | 0.99 | 1550 | 0.0969 | 0.9733 |
| 0.1271 | 1.02 | 1600 | 0.2326 | 0.9467 |
| 0.1736 | 1.06 | 1650 | 0.0979 | 0.9667 |
| 0.1357 | 1.09 | 1700 | 0.0862 | 0.98 |
| 0.1871 | 1.12 | 1750 | 0.1419 | 0.9667 |
| 0.1411 | 1.15 | 1800 | 0.1301 | 0.96 |
| 0.1317 | 1.18 | 1850 | 0.1602 | 0.9533 |
| 0.1432 | 1.22 | 1900 | 0.1885 | 0.9533 |
| 0.1793 | 1.25 | 1950 | 0.0776 | 0.9667 |
| 0.1322 | 1.28 | 2000 | 0.0822 | 0.9733 |
| 0.1416 | 1.31 | 2050 | 0.0920 | 0.9733 |
| 0.1524 | 1.34 | 2100 | 0.0673 | 0.98 |
| 0.1338 | 1.38 | 2150 | 0.0602 | 0.98 |
| 0.152 | 1.41 | 2200 | 0.0916 | 0.98 |
| 0.1192 | 1.44 | 2250 | 0.0559 | 0.98 |
| 0.1471 | 1.47 | 2300 | 0.1096 | 0.9667 |
| 0.1267 | 1.5 | 2350 | 0.0695 | 0.9733 |
| 0.1776 | 1.54 | 2400 | 0.1363 | 0.96 |
| 0.1495 | 1.57 | 2450 | 0.0818 | 0.98 |
| 0.1158 | 1.6 | 2500 | 0.1282 | 0.9667 |
| 0.1772 | 1.63 | 2550 | 0.0682 | 0.9733 |
| 0.1187 | 1.66 | 2600 | 0.1032 | 0.9733 |
| 0.136 | 1.7 | 2650 | 0.1071 | 0.9667 |
| 0.1829 | 1.73 | 2700 | 0.0753 | 0.9667 |
| 0.1147 | 1.76 | 2750 | 0.1071 | 0.9733 |
| 0.1174 | 1.79 | 2800 | 0.1441 | 0.9667 |
| 0.0707 | 1.82 | 2850 | 0.1362 | 0.9667 |
| 0.1372 | 1.86 | 2900 | 0.1861 | 0.9533 |
| 0.2108 | 1.89 | 2950 | 0.0770 | 0.9733 |
| 0.2014 | 1.92 | 3000 | 0.1114 | 0.9667 |
| 0.1373 | 1.95 | 3050 | 0.1244 | 0.9667 |
| 0.1242 | 1.98 | 3100 | 0.1220 | 0.96 |
| 0.1267 | 2.02 | 3150 | 0.1139 | 0.9733 |
| 0.1021 | 2.05 | 3200 | 0.2013 | 0.9533 |
| 0.1091 | 2.08 | 3250 | 0.1027 | 0.9733 |
| 0.0648 | 2.11 | 3300 | 0.1464 | 0.9733 |
| 0.1207 | 2.14 | 3350 | 0.1255 | 0.9733 |
| 0.0833 | 2.18 | 3400 | 0.0708 | 0.98 |
| 0.0796 | 2.21 | 3450 | 0.1608 | 0.96 |
| 0.0624 | 2.24 | 3500 | 0.0827 | 0.98 |
| 0.0518 | 2.27 | 3550 | 0.0602 | 0.98 |
| 0.1242 | 2.3 | 3600 | 0.0752 | 0.9733 |
| 0.0422 | 2.34 | 3650 | 0.1000 | 0.9733 |
| 0.0748 | 2.37 | 3700 | 0.1171 | 0.9667 |
| 0.0839 | 2.4 | 3750 | 0.1341 | 0.9667 |
| 0.1033 | 2.43 | 3800 | 0.0744 | 0.98 |
| 0.0567 | 2.46 | 3850 | 0.0869 | 0.98 |
| 0.0756 | 2.5 | 3900 | 0.0745 | 0.98 |
| 0.0768 | 2.53 | 3950 | 0.0895 | 0.9733 |
| 0.0878 | 2.56 | 4000 | 0.0703 | 0.98 |
| 0.1023 | 2.59 | 4050 | 0.0806 | 0.98 |
| 0.0807 | 2.62 | 4100 | 0.0338 | 0.9867 |
| 0.0868 | 2.66 | 4150 | 0.0892 | 0.9667 |
| 0.0648 | 2.69 | 4200 | 0.1637 | 0.9533 |
| 0.0535 | 2.72 | 4250 | 0.1622 | 0.9667 |
| 0.0675 | 2.75 | 4300 | 0.1354 | 0.9733 |
| 0.1121 | 2.78 | 4350 | 0.1440 | 0.9533 |
| 0.0714 | 2.82 | 4400 | 0.1022 | 0.9467 |
| 0.0786 | 2.85 | 4450 | 0.1110 | 0.9733 |
| 0.0822 | 2.88 | 4500 | 0.1218 | 0.9733 |
| 0.1075 | 2.91 | 4550 | 0.1041 | 0.9733 |
| 0.0783 | 2.94 | 4600 | 0.0992 | 0.9733 |
| 0.1059 | 2.98 | 4650 | 0.1187 | 0.9733 |
| 0.067 | 3.01 | 4700 | 0.0931 | 0.9733 |
| 0.0425 | 3.04 | 4750 | 0.1252 | 0.9733 |
| 0.0539 | 3.07 | 4800 | 0.1152 | 0.9733 |
| 0.0419 | 3.1 | 4850 | 0.1534 | 0.9667 |
| 0.0462 | 3.13 | 4900 | 0.1398 | 0.9733 |
| 0.0435 | 3.17 | 4950 | 0.1168 | 0.98 |
| 0.0144 | 3.2 | 5000 | 0.1489 | 0.9667 |
| 0.0367 | 3.23 | 5050 | 0.1293 | 0.9733 |
| 0.0336 | 3.26 | 5100 | 0.1353 | 0.9733 |
| 0.0246 | 3.29 | 5150 | 0.0958 | 0.98 |
| 0.0181 | 3.33 | 5200 | 0.1294 | 0.9733 |
| 0.0357 | 3.36 | 5250 | 0.1209 | 0.9733 |
| 0.0683 | 3.39 | 5300 | 0.1748 | 0.96 |
| 0.0353 | 3.42 | 5350 | 0.2159 | 0.9533 |
| 0.0415 | 3.45 | 5400 | 0.1723 | 0.96 |
| 0.0336 | 3.49 | 5450 | 0.1031 | 0.98 |
| 0.0475 | 3.52 | 5500 | 0.0959 | 0.98 |
| 0.0393 | 3.55 | 5550 | 0.2163 | 0.96 |
| 0.0337 | 3.58 | 5600 | 0.1097 | 0.9733 |
| 0.0415 | 3.61 | 5650 | 0.1365 | 0.98 |
| 0.035 | 3.65 | 5700 | 0.1175 | 0.98 |
| 0.0448 | 3.68 | 5750 | 0.1543 | 0.9667 |
| 0.0445 | 3.71 | 5800 | 0.2005 | 0.96 |
| 0.0211 | 3.74 | 5850 | 0.1179 | 0.98 |
| 0.0198 | 3.77 | 5900 | 0.1298 | 0.9733 |
| 0.026 | 3.81 | 5950 | 0.2167 | 0.9667 |
| 0.0412 | 3.84 | 6000 | 0.1224 | 0.98 |
| 0.0446 | 3.87 | 6050 | 0.0798 | 0.98 |
| 0.0174 | 3.9 | 6100 | 0.0577 | 0.9933 |
| 0.0535 | 3.93 | 6150 | 0.1482 | 0.9667 |
| 0.0495 | 3.97 | 6200 | 0.0862 | 0.98 |
| 0.0267 | 4.0 | 6250 | 0.1190 | 0.98 |
| 0.0087 | 4.03 | 6300 | 0.0747 | 0.98 |
| 0.0102 | 4.06 | 6350 | 0.0753 | 0.9867 |
| 0.0178 | 4.09 | 6400 | 0.1812 | 0.9667 |
| 0.0088 | 4.13 | 6450 | 0.0817 | 0.98 |
| 0.0144 | 4.16 | 6500 | 0.0805 | 0.98 |
| 0.014 | 4.19 | 6550 | 0.0862 | 0.9867 |
| 0.0002 | 4.22 | 6600 | 0.0894 | 0.98 |
| 0.0112 | 4.25 | 6650 | 0.1004 | 0.9733 |
| 0.0054 | 4.29 | 6700 | 0.0832 | 0.9867 |
| 0.0001 | 4.32 | 6750 | 0.0812 | 0.9867 |
| 0.0202 | 4.35 | 6800 | 0.1828 | 0.9667 |
| 0.009 | 4.38 | 6850 | 0.1114 | 0.98 |
| 0.0001 | 4.41 | 6900 | 0.1295 | 0.98 |
| 0.0077 | 4.45 | 6950 | 0.1610 | 0.9733 |
| 0.0082 | 4.48 | 7000 | 0.1787 | 0.9667 |
| 0.0198 | 4.51 | 7050 | 0.1485 | 0.9733 |
| 0.0017 | 4.54 | 7100 | 0.1774 | 0.9733 |
| 0.0115 | 4.57 | 7150 | 0.1567 | 0.9733 |
| 0.0001 | 4.61 | 7200 | 0.1534 | 0.9733 |
| 0.0247 | 4.64 | 7250 | 0.2020 | 0.9667 |
| 0.0059 | 4.67 | 7300 | 0.1918 | 0.9667 |
| 0.0052 | 4.7 | 7350 | 0.1315 | 0.98 |
| 0.0076 | 4.73 | 7400 | 0.1289 | 0.98 |
| 0.0218 | 4.77 | 7450 | 0.1610 | 0.9733 |
| 0.0077 | 4.8 | 7500 | 0.1355 | 0.98 |
| 0.0096 | 4.83 | 7550 | 0.1378 | 0.9733 |
| 0.008 | 4.86 | 7600 | 0.1568 | 0.9733 |
| 0.0103 | 4.89 | 7650 | 0.1388 | 0.9733 |
| 0.0009 | 4.93 | 7700 | 0.1221 | 0.98 |
| 0.0287 | 4.96 | 7750 | 0.1448 | 0.9733 |
| 0.01 | 4.99 | 7800 | 0.1394 | 0.9733 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
srnsrn120/whisper-small-hi | srnsrn120 | 2022-12-01T07:24:42Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-01T05:57:41Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - srnsrn120
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 40.772877338525355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - srnsrn120
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3428
- Wer: 40.7729
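A minimal transcription sketch with the ASR pipeline is shown below; `sample_hi.wav` is a placeholder path for a Hindi speech recording, and decoding the file requires ffmpeg.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="srnsrn120/whisper-small-hi",
)

# "sample_hi.wav" is a placeholder; the audio is resampled to 16 kHz before transcription.
print(asr("sample_hi.wav")["text"])
```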
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2442 | 0.98 | 400 | 0.3428 | 40.7729 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
minhhoque/segformer-b0-scene-parse-150 | minhhoque | 2022-12-01T06:31:02Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-12-01T05:42:03Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
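No evaluation metrics are reported, but a rough inference sketch is given below; the image URL is only a placeholder, and the feature extractor is loaded from the base `nvidia/mit-b0` checkpoint on the assumption that no preprocessing config was pushed with this fine-tune.
```python
import requests
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, SegformerForSemanticSegmentation

extractor = AutoFeatureExtractor.from_pretrained("nvidia/mit-b0")  # assumption: reuse base preprocessing
model = SegformerForSemanticSegmentation.from_pretrained("minhhoque/segformer-b0-scene-parse-150")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# Upsample to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
print(segmentation.shape, segmentation.unique())
```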
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Evelyn18/roberta-base-spanish-squades-becasv3 | Evelyn18 | 2022-12-01T06:27:03Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-19T13:20:41Z | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasv3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasv3
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6939
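A minimal question-answering sketch follows; the question and context are invented examples about scholarship calls, loosely in the spirit of the becasv2 dataset name.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-becasv3",
)

result = qa(
    question="¿Cuándo cierra la convocatoria de becas?",
    context="La convocatoria de becas para estudiantes de grado cierra el 15 de marzo de 2023.",
)
print(result["answer"], result["score"])
```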
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 1.7032 |
| No log | 2.0 | 10 | 1.6939 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1 |
fanpu/model_output_original_subreddit-cmu_1 | fanpu | 2022-12-01T05:40:32Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-01T05:04:04Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: model_output_original_subreddit-cmu_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_original_subreddit-cmu_1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
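As a quick illustration, the checkpoint can be loaded with the text-generation pipeline as sketched below; the prompt is invented and the sampling settings are arbitrary defaults, not values taken from the training setup.
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="fanpu/model_output_original_subreddit-cmu_1",
)

# Invented prompt; sampling settings are illustrative only.
print(generator("Course registration opens next week and",
                max_new_tokens=50, do_sample=True)[0]["generated_text"])
```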
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dicquiloan/q-FrozenLake-v1-4x4-noSlippery | dicquiloan | 2022-12-01T05:11:21Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-25T23:37:21Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a pip package.
model = load_from_hub(repo_id="dicquiloan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
minhhoque/distilbert-base-uncased_imdb_reviews | minhhoque | 2022-12-01T04:56:58Z | 118 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T02:21:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased_imdb_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_imdb_reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.385 | 0.4 | 500 | 0.3796 |
| 0.2803 | 0.8 | 1000 | 0.2549 |
| 0.208 | 1.2 | 1500 | 0.3218 |
| 0.1655 | 1.6 | 2000 | 0.2577 |
| 0.153 | 2.0 | 2500 | 0.2718 |
| 0.0552 | 2.4 | 3000 | 0.3514 |
| 0.0667 | 2.8 | 3500 | 0.3427 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-kitchen_and_dining-1-16-5 | fathyshalab | 2022-12-01T04:45:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:15:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-kitchen_and_dining-1-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-credit_cards-8-16-5 | fathyshalab | 2022-12-01T03:58:24Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:12:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DLL888/roberta-base-squad | DLL888 | 2022-12-01T03:55:20Z | 63 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-01T03:24:46Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: DLL888/roberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DLL888/roberta-base-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7054
- Train End Logits Accuracy: 0.8022
- Train Start Logits Accuracy: 0.7586
- Validation Loss: 0.8224
- Validation End Logits Accuracy: 0.7692
- Validation Start Logits Accuracy: 0.7402
- Epoch: 1
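Because this checkpoint was trained and saved with Keras/TensorFlow, a minimal TF inference sketch might look like the following; the question/context pair is invented.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_id = "DLL888/roberta-base-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(inputs)

# Pick the most likely start/end positions and decode the answer span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```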
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10570, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 500, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.1613 | 0.7038 | 0.6632 | 0.8676 | 0.7626 | 0.7342 | 0 |
| 0.7054 | 0.8022 | 0.7586 | 0.8224 | 0.7692 | 0.7402 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Roman1998/tesorflowTest | Roman1998 | 2022-12-01T03:48:43Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T03:47:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tesorflowTest
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tesorflowTest
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1220
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.2863 | 0 |
| 0.1671 | 1 |
| 0.1220 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Zengwei/icefall-asr-librispeech-pruned-transducer-stateless7-ctc-2022-12-01 | Zengwei | 2022-12-01T03:29:09Z | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | 2022-12-01T02:01:38Z | This repo contains pre-trained models, checkpoints, training logs and decoding results of the following pull-request:
https://github.com/k2-fsa/icefall/pull/683
|
huggingtweets/prezoh | huggingtweets | 2022-12-01T03:28:19Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/prezoh/1669865295720/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590487732387733505/JiMBIJrZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">prezoh</div>
<div style="text-align: center; font-size: 14px;">@prezoh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from prezoh.
| Data | prezoh |
| --- | --- |
| Tweets downloaded | 3158 |
| Retweets | 30 |
| Short tweets | 905 |
| Tweets kept | 2223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/278h7rp5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prezoh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3e7ukxmi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3e7ukxmi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/prezoh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Rastadayon/wav2vec2-large-xls-r-300m-dutch-colab | Rastadayon | 2022-12-01T03:20:45Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-30T20:59:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-dutch-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dutch-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5834
- eval_wer: 0.3471
- eval_cer: 0.1181
- eval_runtime: 338.6313
- eval_samples_per_second: 14.582
- eval_steps_per_second: 1.825
- epoch: 14.87
- step: 4000
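As a minimal sketch (not part of the original card), the checkpoint could be loaded with the standard `transformers` ASR pipeline; the audio path below is a placeholder for a 16 kHz mono recording:
```python
from transformers import pipeline

# Minimal sketch: transcribe a Dutch recording with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Rastadayon/wav2vec2-large-xls-r-300m-dutch-colab",
)

# "sample_dutch.wav" is a placeholder path; wav2vec2 expects 16 kHz mono audio.
print(asr("sample_dutch.wav")["text"])
```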
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
itisphilippe/StackOverflowNER | itisphilippe | 2022-12-01T02:53:38Z | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | 2022-11-30T07:01:36Z | ---
license: mit
---
Models and other data for https://github.com/jeniyat/StackOverflowNER. Use `git lfs fetch --all` to download all files.
Please note that folders are stored decompressed due to HuggingFace file size limitations.
The individual files in ./data_ctc/ are compressed using `gzip`, and can be decompressed using `gunzip -d *.gz`.
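As a rough sketch (assuming the repository has been cloned and `./data_ctc/` holds the `.gz` archives), the same decompression can also be done from Python:
```python
import glob
import gzip
import shutil

# Decompress every .gz file in ./data_ctc/ next to its archive.
for gz_path in glob.glob("./data_ctc/*.gz"):
    out_path = gz_path[:-3]  # drop the trailing ".gz"
    with gzip.open(gz_path, "rb") as src, open(out_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
```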
Intermediate model checkpoints have not been uploaded due to bandwidth limitations.
**BibTeX entry and citation info**
```bibtex
@inproceedings{Tabassum20acl,
title = {Code and Named Entity Recognition in StackOverflow},
author = "Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan",
booktitle = {The Annual Meeting of the Association for Computational Linguistics (ACL)},
year = {2020}
}
``` |
fathyshalab/all-roberta-large-v1-credit_cards-5-16-5 | fathyshalab | 2022-12-01T02:47:27Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:07:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Yanjie24/t5-samsung | Yanjie24 | 2022-12-01T02:31:14Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-01T02:09:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-samsung
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: train
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 42.2345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-samsung
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8153
- Rouge1: 42.2345
- Rouge2: 18.983
- Rougel: 33.0073
- Rougelsum: 38.8755
- Gen Len: 36.4242
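As a minimal sketch (assuming the standard `transformers` summarization pipeline; the dialogue below is purely illustrative), the checkpoint could be used like this:
```python
from transformers import pipeline

# Minimal sketch: summarize a short dialogue with the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="Yanjie24/t5-samsung")

dialogue = (
    "Amanda: I baked cookies. Do you want some? "
    "Jerry: Sure! Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```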
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.0028 | 1.0 | 1841 | 1.8153 | 42.2345 | 18.983 | 33.0073 | 38.8755 | 36.4242 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
SathEdu/distilbert-base-uncased-finetuned-emotion | SathEdu | 2022-12-01T02:15:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-19T07:30:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9256889016417648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.9255
- F1: 0.9257
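As a minimal sketch (assuming the standard `transformers` text-classification pipeline), the model could be queried directly:
```python
from transformers import pipeline

# Minimal sketch: classify the emotion of a sample sentence.
classifier = pipeline(
    "text-classification",
    model="SathEdu/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled with how the results turned out!"))
```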
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7962 | 1.0 | 250 | 0.3167 | 0.903 | 0.8984 |
| 0.2475 | 2.0 | 500 | 0.2222 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual | cardiffnlp | 2022-12-01T02:11:30Z | 114 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"dataset:cardiffnlp/tweet_sentiment_multilingual",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-01T02:07:08Z | ---
datasets:
- cardiffnlp/tweet_sentiment_multilingual
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_sentiment_multilingual
type: all
split: test
metrics:
- name: Micro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6169540229885058
- name: Macro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: macro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6168385894019698
- name: Accuracy (cardiffnlp/tweet_sentiment_multilingual/all)
type: accuracy_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6169540229885058
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the
[`cardiffnlp/tweet_sentiment_multilingual (all)`](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
Training split is `train` and parameters have been tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual/raw/main/metric.json)).
- F1 (micro): 0.6169540229885058
- F1 (macro): 0.6168385894019698
- Accuracy: 0.6169540229885058
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
fathyshalab/all-roberta-large-v1-credit_cards-3-16-5 | fathyshalab | 2022-12-01T01:59:23Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T18:04:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DiogoSabec/BOT | DiogoSabec | 2022-12-01T01:33:17Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-12-01T00:40:43Z | ---
tags:
- conversational
---
|
sd-dreambooth-library/crisimsestelle | sd-dreambooth-library | 2022-12-01T01:20:13Z | 52 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-29T16:50:18Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Contain Real Ingredients on Stable Diffusion 2 via Dreambooth
#### model by estelleflores

This is a Stable Diffusion 2 model fine-tuned to the CRIsimsEstelle concept taught to Stable Diffusion with Dreambooth.

It can be used by modifying the `instance_prompt`: **3d render in \<cri-sims> style**; alternatively, just using the initializer '\<cri-sims> style' somewhere in your prompt will work.

Images used for training this concept come from the [project Contain Real Ingredients](https://teia.art/estelle), an art practice inside the game The Sims 4 by artist Estelle Flores:




















You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) |
wmFrank/sample-factory-2-megaverse | wmFrank | 2022-12-01T00:50:17Z | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-01T00:49:58Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: TowerBuilding
type: TowerBuilding
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
An **APPO** model trained on the **TowerBuilding** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
fathyshalab/all-roberta-large-v1-banking-9-16-5 | fathyshalab | 2022-12-01T00:47:58Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T18:53:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-8-16-5 | fathyshalab | 2022-12-01T00:21:13Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T18:30:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Taqwa/whisper-small-hi | Taqwa | 2022-12-01T00:05:15Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-26T20:53:48Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 35.74028612545501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [Taqwa/whisper-small-hiTaqwa](https://huggingface.co/Taqwa/whisper-small-hiTaqwa) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3353
- Wer: 35.7403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0762 | 0.31 | 125 | 0.2818 | 33.3573 |
| 0.0653 | 0.61 | 250 | 0.2930 | 33.9584 |
| 0.062 | 0.92 | 375 | 0.3060 | 34.7456 |
| 0.0518 | 1.22 | 500 | 0.3353 | 35.7403 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/all-roberta-large-v1-banking-6-16-5 | fathyshalab | 2022-11-30T23:26:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T17:44:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CarperAI/randomwalks | CarperAI | 2022-11-30T22:22:26Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-10-28T17:23:14Z | ---
license: mit
---
This is a pretrained model used in the [PPO toy example](https://github.com/CarperAI/trlx/tree/main/examples/randomwalks) from [CarperAI/trlX](https://github.com/CarperAI/trlx/tree/main/examples/randomwalks). |
deblagoj/distilbert-base-uncased-finetuned-emotion | deblagoj | 2022-11-30T22:05:20Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-07T18:26:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
- name: F1
type: f1
value: 0.9190903538852266
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.919
- F1: 0.9191
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3153 | 0.904 | 0.9016 |
| 0.2515 | 2.0 | 500 | 0.2225 | 0.919 | 0.9191 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/poisonjr | huggingtweets | 2022-11-30T21:50:40Z | 119 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-30T21:49:04Z | ---
language: en
thumbnail: http://www.huggingtweets.com/poisonjr/1669845035713/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1582446449228382209/8JRLlVu__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">gale na</div>
<div style="text-align: center; font-size: 14px;">@poisonjr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from gale na.
| Data | gale na |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 731 |
| Short tweets | 782 |
| Tweets kept | 1691 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33t9oiqy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @poisonjr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3c5vn57r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3c5vn57r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/poisonjr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
manirai91/enlm-roberta-final | manirai91 | 2022-11-30T21:40:33Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-28T03:41:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: enlm-roberta-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-final
This model is a fine-tuned version of [manirai91/enlm-roberta](https://huggingface.co/manirai91/enlm-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4187
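As a minimal sketch (assuming the RoBERTa-style `<mask>` token), the model could be queried with the fill-mask pipeline:
```python
from transformers import pipeline

# Minimal sketch: ask the masked language model to fill in the blank.
fill = pipeline("fill-mask", model="manirai91/enlm-roberta-final")
print(fill("The capital of France is <mask>."))
```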
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 8192
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5245 | 0.34 | 160 | 1.4187 |
| 1.5245 | 0.69 | 320 | 1.4183 |
| 1.5259 | 1.03 | 480 | 1.4177 |
| 1.5265 | 1.37 | 640 | 1.4185 |
| 1.5245 | 1.72 | 800 | 1.4190 |
| 1.5241 | 2.06 | 960 | 1.4172 |
| 1.5227 | 2.4 | 1120 | 1.4165 |
| 1.5226 | 2.75 | 1280 | 1.4152 |
| 1.522 | 3.09 | 1440 | 1.4190 |
| 1.5243 | 3.43 | 1600 | 1.4177 |
| 1.5213 | 3.78 | 1760 | 1.4134 |
| 1.524 | 4.12 | 1920 | 1.4140 |
| 1.5223 | 4.46 | 2080 | 1.4173 |
| 1.5236 | 4.81 | 2240 | 1.4121 |
| 1.5239 | 5.15 | 2400 | 1.4186 |
| 1.5203 | 5.49 | 2560 | 1.4154 |
| 1.522 | 5.84 | 2720 | 1.4162 |
| 1.5209 | 6.18 | 2880 | 1.4154 |
| 1.5196 | 6.52 | 3040 | 1.4153 |
| 1.5209 | 6.87 | 3200 | 1.4122 |
| 1.5202 | 7.21 | 3360 | 1.4146 |
| 1.5192 | 7.55 | 3520 | 1.4141 |
| 1.5215 | 7.9 | 3680 | 1.4123 |
| 1.5228 | 8.24 | 3840 | 1.4147 |
| 1.5222 | 8.58 | 4000 | 1.4144 |
| 1.5201 | 8.93 | 4160 | 1.4173 |
| 1.523 | 9.27 | 4320 | 1.4171 |
| 1.5212 | 9.61 | 4480 | 1.4149 |
| 1.522 | 9.96 | 4640 | 1.4187 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/blewglass | huggingtweets | 2022-11-30T21:38:03Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-30T21:36:41Z | ---
language: en
thumbnail: http://www.huggingtweets.com/blewglass/1669844278462/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1589805873366724610/ifGVL-6g_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">come back clammy</div>
<div style="text-align: center; font-size: 14px;">@blewglass</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from come back clammy.
| Data | come back clammy |
| --- | --- |
| Tweets downloaded | 3174 |
| Retweets | 582 |
| Short tweets | 317 |
| Tweets kept | 2275 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cybl684/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @blewglass's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zifv54gk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zifv54gk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/blewglass')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
danielsaggau/scotus_py | danielsaggau | 2022-11-30T21:12:28Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"longformer",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-30T21:12:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 970 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 970,
"warmup_steps": 97,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
fathyshalab/all-roberta-large-v1-banking-1-16-5 | fathyshalab | 2022-11-30T21:09:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T15:45:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-1-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4479
- Accuracy: 0.2301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.716 | 1.0 | 1 | 2.6641 | 0.1327 |
| 2.1674 | 2.0 | 2 | 2.5852 | 0.1858 |
| 1.7169 | 3.0 | 3 | 2.5202 | 0.2035 |
| 1.3976 | 4.0 | 4 | 2.4729 | 0.2124 |
| 1.2503 | 5.0 | 5 | 2.4479 | 0.2301 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gavin124/gpt2-finetuned-cnn-summarization-v1 | gavin124 | 2022-11-30T20:40:22Z | 80 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"summarization",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-11-30T15:33:05Z | ---
license: mit
tags:
- summarization
- generated_from_trainer
model-index:
- name: gpt2-finetuned-cnn-summarization-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-cnn-summarization-v1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1709
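As a rough sketch (the card does not document the prompt format used during fine-tuning, so this simply loads the checkpoint as a causal text-generation model; the article text is a placeholder):
```python
from transformers import pipeline

# Rough sketch: generate a continuation from an article prefix with the fine-tuned GPT-2.
generator = pipeline(
    "text-generation",
    model="gavin124/gpt2-finetuned-cnn-summarization-v1",
)
article = "(Placeholder) A short news article to be summarized goes here."
print(generator(article, max_new_tokens=60)[0]["generated_text"])
```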
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2025 | 1.0 | 5742 | 2.1636 |
| 2.0428 | 2.0 | 11484 | 2.1659 |
| 1.9681 | 3.0 | 17226 | 2.1709 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pere/whisper-medium-NST-uf-linlr | pere | 2022-11-30T19:24:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"NbAiLab/NST",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-28T07:44:59Z | ---
license: apache-2.0
tags:
- hf-asr-leaderboard
- automatic-speech-recognition
- NbAiLab/NST
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-NST-uf-linlr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-NST-uf-linlr
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the NBAILAB/NST - NO-CLOSE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3007
- Wer: 9.1220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 72
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.2046 | 0.05 | 1000 | 0.3426 | 15.2794 |
| 0.148 | 0.1 | 2000 | 0.3284 | 10.8324 |
| 0.121 | 0.15 | 3000 | 0.3092 | 12.8848 |
| 0.1089 | 0.2 | 4000 | 0.2808 | 10.4903 |
| 0.0976 | 0.25 | 5000 | 0.2617 | 9.9202 |
| 0.0901 | 0.3 | 6000 | 0.2604 | 21.8928 |
| 0.0834 | 0.35 | 7000 | 0.2877 | 9.3501 |
| 0.0825 | 0.4 | 8000 | 0.2794 | 9.3501 |
| 0.0553 | 1.05 | 9000 | 0.2845 | 9.5781 |
| 0.0472 | 1.1 | 10000 | 0.2814 | 24.1733 |
| 0.0409 | 1.15 | 11000 | 0.3084 | 8.0958 |
| 0.041 | 1.2 | 12000 | 0.2865 | 9.2360 |
| 0.0353 | 1.25 | 13000 | 0.2828 | 6.4994 |
| 0.0348 | 1.3 | 14000 | 0.2708 | 7.5257 |
| 0.0349 | 1.35 | 15000 | 0.2842 | 23.0331 |
| 0.0361 | 1.4 | 16000 | 0.2769 | 10.1482 |
| 0.0249 | 2.04 | 17000 | 0.2935 | 8.8940 |
| 0.0204 | 2.09 | 18000 | 0.2874 | 12.4287 |
| 0.0175 | 2.14 | 19000 | 0.2882 | 12.9989 |
| 0.0197 | 2.19 | 20000 | 0.3007 | 9.1220 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ShadoWxShinigamI/vray-render | ShadoWxShinigamI | 2022-11-30T19:05:09Z | 0 | 54 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-30T18:55:42Z | ---
license: creativeml-openrail-m
---
## Textual Inversion Embedding For SD 2.0 (768) by ShadoWxShinigamI
44 Images, 768x768, Batch Size 4, Gradient Accumulation 11, Vectors - 6, Steps 500
I love the V-Ray Render style, and wanted to try making an embed for a highly varied style. This is my attempt. It is definitely not perfect: it gives slightly soft outputs, and I will revisit this embed once I get the hang of training efficiently.
In case of any errors when using this embedding with Auto1111, try out the png embed instead.
Examples:-






|
abdalrahmanshahrour/ShahrourDamageLenses | abdalrahmanshahrour | 2022-11-30T19:01:07Z | 0 | 0 | null | [
"region:us"
] | null | 2022-11-30T18:37:14Z | # Damage-detection

Project files:
## step 1: download all files
1. clone my repo
```bash
git clone https://github.com/AbdelrahmanShahrour/Damage-detection.git
```
2. get data and models files from [here](https://drive.google.com/drive/folders/1vXaD8z2J_kbh8oDU4rNcyuPoXgjOSRKs?usp=sharing)

## step 2: create a venv and install all libraries
```bash
python3 -m venv env
```
```bash
source env/bin/activate
```
```bash
pip3 install -r requirements.txt
```
## step 3: open jupyter notebook
```bash
jupyter notebook
```
## step 4: open `output.ipynb` and run all cells



## step 5: enjoy, develop this project, and share it with me 😁👍🏻
|
andrewzhang505/isaacgym_humanoid | andrewzhang505 | 2022-11-30T19:00:40Z | 9 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-30T01:40:53Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid
type: Humanoid
metrics:
- type: mean_reward
value: 8418.38 +/- 1855.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **Humanoid** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r andrewzhang505/isaacgym_humanoid
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.enjoy_isaacgym --algo=APPO --env=Humanoid --train_dir=./train_dir --experiment=isaacgym_humanoid
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.isaacgym_examples.train_isaacgym --algo=APPO --env=Humanoid --train_dir=./train_dir --experiment=isaacgym_humanoid --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Crushtoe/GODEL-v1_1-base-seq2seq-vangluss | Crushtoe | 2022-11-30T18:59:59Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-30T17:39:03Z | ---
tags:
- conversational
---
# Vangluss: Bot Edition
Trying (and failing) to use GODEL in place of DialoGPT. |
htermotto/distilbert-base-uncased-finetuned-sngp-squad-seed-42 | htermotto | 2022-11-30T18:58:48Z | 33 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-11-30T10:31:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-sngp-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sngp-squad-seed-42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4521 | 1.0 | 8248 | 2.0439 |
| 2.1298 | 2.0 | 16496 | 1.9074 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
jmunoz/finetuning-sentiment-model-3000-samples | jmunoz | 2022-11-30T18:41:53Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-29T22:47:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 1.2.1
- Tokenizers 0.12.1
|
ximboleta/Glebbo | ximboleta | 2022-11-30T18:39:44Z | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2022-11-30T18:39:44Z | ---
license: cc-by-nc-nd-4.0
---
|
edgertej/poebert-checkpoint-finetuned-poetry-foundation-2 | edgertej | 2022-11-30T17:14:10Z | 78 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-30T16:14:34Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: edgertej/poebert-checkpoint-finetuned-poetry-foundation-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# edgertej/poebert-checkpoint-finetuned-poetry-foundation-2
This model is a fine-tuned version of [edgertej/poebert-checkpoint-finetuned-poetry-foundation](https://huggingface.co/edgertej/poebert-checkpoint-finetuned-poetry-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8653
- Validation Loss: 3.5986
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9003 | 3.6587 | 0 |
| 3.8970 | 3.6169 | 1 |
| 3.8653 | 3.5986 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
alexrofail/sd-class-butterflies-32 | alexrofail | 2022-11-30T16:31:22Z | 33 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T16:29:47Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
In this run I just ran each cell of the NB to understand what is going on.
Experimentation to follow 🙏
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alexrofail/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
fathyshalab/all-roberta-large-v1-banking-17-16-5 | fathyshalab | 2022-11-30T15:28:05Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T21:57:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-17-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-17-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-16-16-5 | fathyshalab | 2022-11-30T15:24:44Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T21:34:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-16-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-16-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fathyshalab/all-roberta-large-v1-banking-14-16-5 | fathyshalab | 2022-11-30T15:17:52Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-28T20:48:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-banking-14-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-14-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gd1m3y/sentiment_bert | gd1m3y | 2022-11-30T15:04:50Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-30T14:20:13Z | ---
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: sentiment_bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_66agree
split: train
args: sentences_66agree
metrics:
- name: Accuracy
type: accuracy
value: 0.9360189573459715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_bert
This model is a fine-tuned version of [SALT-NLP/FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3754
- Accuracy: 0.9360
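A minimal inference sketch (assumed usage, not part of the original card), scoring a financial sentence with the fine-tuned classifier; the label names depend on the `id2label` mapping stored in the model config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gd1m3y/sentiment_bert")
model = AutoModelForSequenceClassification.from_pretrained("gd1m3y/sentiment_bert")

inputs = tokenizer(
    "The company reported a strong increase in quarterly revenue.",
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label name
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```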
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tomekkorbak/compassionate_hypatia | tomekkorbak | 2022-11-30T14:23:57Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T19:22:43Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: compassionate_hypatia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# compassionate_hypatia
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
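A minimal generation sketch, assuming the checkpoint loads as a standard GPT-2 causal language model with the tokenizer stored in the repository; the prompt is made up and the sampling settings simply echo the generation config listed further down the card:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="tomekkorbak/compassionate_hypatia")

# Sampling settings mirror the card's generation config (temperature 0.7, top_p 0.9)
output = generator(
    "The city library reopened this morning",
    max_length=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(output[0]["generated_text"])
```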
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00065,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'compassionate_hypatia',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3kybxs99 |
yorko/sd-class-butterflies-32 | yorko | 2022-11-30T13:41:32Z | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-11-30T13:30:35Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("yorko/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
nixmaverick1997/app-setfit-classifier | nixmaverick1997 | 2022-11-30T13:32:26Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-classifier",
"transformers",
"sentiment-classifier",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-10-31T16:11:57Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-classifier
- transformers
- sentiment-classifier
---
# SetFit Sentiment Classifier
This is a variant of the [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
It uses Siamese and triplet network structures to generate semantically meaningful sentence embeddings.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [setfit](https://github.com/huggingface/setfit) installed:
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel

sentences = ["This is an example sentence", "Each sentence is converted"]
model = SetFitModel.from_pretrained("nixmaverick1997/app-setfit-classifier")

# SetFitModel is a classifier, so predict() returns one label per input sentence;
# raw sentence embeddings, if needed, come from model.model_body.encode(sentences)
predictions = model.predict(sentences)
print(predictions)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("nixmaverick1997/app-setfit-classifier")
model = AutoModel.from_pretrained("nixmaverick1997/app-setfit-classifier")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Loss class = CosineSimilarityLoss
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 640 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 640,
"warmup_steps": 64,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Watwat100/256data | Watwat100 | 2022-11-30T13:00:52Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-30T13:00:38Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Watwat100/256data
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Watwat100/256data')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Watwat100/256data)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1576 with parameters:
```
{'batch_size': 13, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 4728,
"warmup_steps": 473,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
kejian/immaculate-filtering | kejian | 2022-11-30T12:11:34Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T15:12:15Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: immaculate-filtering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# immaculate-filtering
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
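A minimal generation sketch, assuming the checkpoint loads as a causal language model with the codeparrot-small tokenizer named in the config below; since the training data is filtered Python code, a function signature makes a natural prompt:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kejian/immaculate-filtering")

prompt = 'def mean(values):\n    """Return the arithmetic mean of a list of numbers."""\n'
completion = generator(
    prompt,
    max_length=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(completion[0]["generated_text"])
```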
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'filter_threshold': 0.002361,
'is_split_by_sentences': True},
'generation': {'batch_size': 128,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'immaculate-filtering',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/3jjalm0n |
GujjetiNagaraju/xlm-roberta-base-finetuned-Telugu_NLP | GujjetiNagaraju | 2022-11-30T12:10:20Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-30T11:05:48Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-Telugu_NLP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-Telugu_NLP
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9986
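A minimal fill-mask sketch (assumed usage; the example sentence is an English placeholder for readability, though Telugu text is the intended input). Note that XLM-RoBERTa tokenizers use `<mask>` rather than BERT's `[MASK]`:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="GujjetiNagaraju/xlm-roberta-base-finetuned-Telugu_NLP",
)

# XLM-RoBERTa expects the <mask> token in the input text
print(fill_mask("Hyderabad is the capital city of <mask>."))
```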
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4192 | 1.0 | 1250 | 2.1557 |
| 2.2859 | 2.0 | 2500 | 2.0632 |
| 2.2311 | 3.0 | 3750 | 2.0083 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
roscazo/DisTEMIST_fine_tuned_sentence | roscazo | 2022-11-30T11:30:15Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-23T09:51:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: DisTEMIST_fine_tuned_sentence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DisTEMIST_fine_tuned_sentence
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2164
- Precision: 0.6069
- Recall: 0.6401
- F1: 0.6231
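A minimal entity-extraction sketch, assuming the checkpoint works with the standard token-classification pipeline; the Spanish clinical sentence is made up, and `aggregation_strategy="simple"` merges subword pieces into whole entity spans:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="roscazo/DisTEMIST_fine_tuned_sentence",
    aggregation_strategy="simple",
)

# Each result contains the entity group, score, text span and character offsets
print(ner("Paciente con diabetes mellitus tipo 2 y antecedentes de hipertensión arterial."))
```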
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=2.6e-09
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 73
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|
| 0.1166 | 1.0 | 1099 | 0.1152 | 0.5214 | 0.6433 | 0.5760 |
| 0.0718 | 2.0 | 2198 | 0.1096 | 0.6015 | 0.6297 | 0.6153 |
| 0.0438 | 3.0 | 3297 | 0.1517 | 0.6573 | 0.5895 | 0.6215 |
| 0.0293 | 4.0 | 4396 | 0.1496 | 0.6212 | 0.6198 | 0.6205 |
| 0.0179 | 5.0 | 5495 | 0.1665 | 0.5670 | 0.6505 | 0.6059 |
| 0.0119 | 6.0 | 6594 | 0.1602 | 0.6035 | 0.6379 | 0.6202 |
| 0.0078 | 7.0 | 7693 | 0.1844 | 0.6008 | 0.6347 | 0.6173 |
| 0.0041 | 8.0 | 8792 | 0.2019 | 0.6006 | 0.6288 | 0.6144 |
| 0.0026 | 9.0 | 9891 | 0.2075 | 0.6015 | 0.6270 | 0.6140 |
| 0.0014 | 10.0 | 10990 | 0.2164 | 0.6069 | 0.6401 | 0.6231 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|