modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-28 18:26:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 477 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-28 18:24:32) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
EmirhanExecute/ppo-LunarLander-try2 | EmirhanExecute | 2023-08-24T08:56:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T08:56:29Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.15 +/- 15.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
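The stub above is the course template placeholder. A minimal sketch of loading and evaluating this checkpoint with `huggingface_sb3` could look like the following; the `.zip` filename is an assumption and should be checked against the repository's file list.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption; check the repository for the actual .zip name.
checkpoint = load_from_hub(
    repo_id="EmirhanExecute/ppo-LunarLander-try2",
    filename="ppo-LunarLander-try2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```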
|
bigmorning/train_from_raw_cv12__0020 | bigmorning | 2023-08-24T08:54:06Z | 60 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-24T08:53:58Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: train_from_raw_cv12__0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# train_from_raw_cv12__0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Train Accuracy: 0.0032
- Train Wermet: 8.3902
- Validation Loss: nan
- Validation Accuracy: 0.0032
- Validation Wermet: 8.3902
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| nan | 0.0032 | 8.3778 | nan | 0.0032 | 8.3902 | 0 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 1 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 2 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 3 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 4 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 5 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 6 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 7 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 8 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 9 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 10 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 11 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 12 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 13 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 14 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 15 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 16 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 17 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 18 |
| nan | 0.0032 | 8.3902 | nan | 0.0032 | 8.3902 | 19 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
nomsgadded/Translation | nomsgadded | 2023-08-24T08:52:26Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"fr",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-24T08:13:12Z | ---
language:
- en
- fr
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: Translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books en-fr dataset.
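The card omits an inference example. A minimal sketch, assuming the standard `transformers` translation pipeline and the English-to-French direction implied by the language tags:
```python
from transformers import pipeline

# English -> French translation with the fine-tuned checkpoint.
translator = pipeline("translation_en_to_fr", model="nomsgadded/Translation")

result = translator("This is a small test sentence.", max_length=64)
print(result[0]["translation_text"])
```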
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
922-CA/negev-gfl-rvc2-tests | 922-CA | 2023-08-24T08:51:21Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-08-22T08:46:16Z | ---
license: openrail
---
Test RVC2 models of the GFL character Negev, trained with various hyperparameters and datasets.
# negev-test-0 (~07/2023)
* Trained on dataset of ~30 items, dialogue from game
* Trained for ~100 epochs
* First attempt
# negev-test-1 - nne1_e10_s150 (08/22/2023)
* Trained on dataset of ~30 items, dialogue from game
* Trained for 10 epochs (150 steps)
* Less artifacting but with accent
# negev-test-1 - nne1_e60_s900 (08/22/2023)
* Trained on dataset of ~30 items, dialogue from game
* Trained for 60 epochs (900 steps)
* Tends to be clearer and with less accent
|
ashokdavas/ppo-LunarLander-v2 | ashokdavas | 2023-08-24T08:44:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T08:44:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.99 +/- 16.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
WhoTookMyAmogusNickname/ReasonixPajama-3B-GGML | WhoTookMyAmogusNickname | 2023-08-24T08:41:41Z | 0 | 1 | null | [
"region:us"
] | null | 2023-08-24T07:58:17Z | Amogus\
GGML quants of [ReasonixPajama-3b-HF](https://huggingface.co/Fredithefish/ReasonixPajama-3B-HF) |
IAMNawaf/QA-History-Saudi | IAMNawaf | 2023-08-24T08:30:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"ar",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-23T14:32:32Z | ---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: SA-History-NASEEJ-QA
results: []
language:
- ar
library_name: transformers
widget:
- text: Who was the oldest member of Al Saud to assume the emirate?
  context: >-
    After the death of Saud bin Muhammad bin Muqrin, the emirate passed to Zaid
    bin Markhan bin Watban, the oldest member of Al Saud, but his rule did not
    last long because of his advanced age. This prompted Muqrin bin Muhammad bin
    Muqrin to seize the emirate from him, yet his rule did not last long either:
    when he attempted to betray Zaid bin Markhan, who had ruled before him,
    Muhammad bin Saud and Muqrin bin Abdullah killed him in 1139 AH / 1727 CE.
    Zaid bin Markhan then returned to the emirate, but the emirate of
    al-Uyaynah later conspired against him and asked to negotiate with him, and
    when he went he was killed. After the killing of Zaid bin Markhan, Muhammad
    bin Saud bin Muqrin assumed the emirate in Diriyah in 1139 AH / 1727 CE, and
    his rule lasted until 1179 AH / 1765 CE.
  example_title: History of the Kingdom of Saudi Arabia
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Naseej-SA-History-QA
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0791
## Model description
The Naseej-SA-History-QA model is a fine-tuned version of the aubmindlab/bert-base-arabertv02 pre-trained BERT model.
It has been tailored and optimized for question answering tasks related to the history of Saudi Arabia.
The model is designed to comprehend historical context and provide accurate answers to questions in Arabic language.
## Intended uses & limitations
The Naseej-SA-History-QA model is intended to be used for answering historical questions specifically related to the history of Saudi Arabia. It can be employed in educational and research settings to assist students, scholars, and researchers in obtaining information about Saudi Arabian history. The model can also be utilized in various NLP applications where historical context is a key factor, such as building educational platforms, historical archives, and language translation tools.
The model's performance is contingent upon the quality and accuracy of the training and evaluation data it has been fine-tuned on. It may struggle with questions that deviate significantly from the training data distribution.
The model's understanding of historical events and context is based on the data it has been trained on. It may not perform well on questions involving more recent or less documented historical events.
The model may not fully comprehend nuanced or highly specific historical inquiries that require deep contextual understanding beyond the scope of its training data.
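As a concrete illustration, a minimal sketch of querying the model with the `transformers` question-answering pipeline follows; the question and context strings are placeholders for the Arabic inputs the model expects:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="IAMNawaf/QA-History-Saudi")

# Placeholders: supply an Arabic question and an Arabic context passage.
result = qa(question="<Arabic question>", context="<Arabic historical passage>")
print(result["answer"], result["score"])
```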
## Training and evaluation data
The Naseej-SA-History-QA model was trained using a custom dataset comprising historical questions and corresponding context passages related to the history of Saudi Arabia. The dataset covers various historical periods and events, providing the model with a wide range of historical context to learn from.
The evaluation set used during training was designed to assess the model's performance on question answering tasks. The evaluation set includes a variety of questions and context passages that challenge the model's ability to accurately answer questions about Saudi Arabian history.
## Training procedure
The Naseej-SA-History-QA model was fine-tuned using the aubmindlab/bert-base-arabertv02 pre-trained BERT model. The training process involved several key steps:
Dataset Preparation: A custom dataset was curated for training the model. The dataset consisted of pairs of historical questions and corresponding context passages, both in Arabic language. The context passages provided the necessary historical context for answering the questions.
Tokenization: The dataset was tokenized using the Tokenizers library, which converts text into a format that the model can understand. Tokenization converts words and subwords into numerical tokens that the model can process.
Model Fine-Tuning: The tokenized dataset was used to fine-tune the aubmindlab/bert-base-arabertv02 base model using the Transformers library. During fine-tuning, the model was adjusted to perform well on the specific task of question answering related to Saudi Arabian history.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 11 | 4.9014 |
| No log | 2.0 | 22 | 4.7432 |
| No log | 3.0 | 33 | 4.6212 |
| No log | 4.0 | 44 | 4.6347 |
| No log | 5.0 | 55 | 4.6101 |
| No log | 6.0 | 66 | 4.6209 |
| No log | 7.0 | 77 | 4.6445 |
| No log | 8.0 | 88 | 4.6284 |
| No log | 9.0 | 99 | 4.6226 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3 |
nishant-glance/path-to-save-model-2-1-priorp-lowlr | nishant-glance | 2023-08-24T08:30:19Z | 2 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-24T07:40:57Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - nishant-glance/path-to-save-model-2-1-priorp-lowlr
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
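A minimal sketch of generating images from these weights with `diffusers`, using the instance prompt listed above (GPU and float16 are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "nishant-glance/path-to-save-model-2-1-priorp-lowlr",
    torch_dtype=torch.float16,
).to("cuda")

# Generate with the instance prompt the weights were trained on.
image = pipe("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```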
|
lordhiew/myfirsttrain | lordhiew | 2023-08-24T08:25:44Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-28T07:25:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Stomper10/CXR_ti_nf | Stomper10 | 2023-08-24T08:22:49Z | 13 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-24T05:22:54Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Stomper10/CXR_ti_nf
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
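A minimal sketch of applying these weights on top of the base model with `diffusers`; the placeholder token `<cxr>` is an assumption, since the card does not state which token the embedding was trained with:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding; "<cxr>" stands in for the actual trained token.
pipe.load_textual_inversion("Stomper10/CXR_ti_nf")

image = pipe("a chest X-ray in <cxr> style").images[0]
image.save("cxr_example.png")
```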




















|
raygx/distilGPT-NepSA | raygx | 2023-08-24T08:12:30Z | 71 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-13T04:59:50Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilGPT-NepSA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilGPT-NepSA
This model is a fine-tuned version of [raygx/distilGPT-Nepali](https://huggingface.co/raygx/distilGPT-Nepali) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6068
- Validation Loss: 0.6592
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.04}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8415 | 0.7254 | 0 |
| 0.6068 | 0.6592 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-roberta-large-model-v0.1 | amazon | 2023-08-24T08:10:03Z | 27 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-08-24T08:09:30Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-roberta-large-model-v0.1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amazon/sm-hackathon-actionability-9-multi-outputs-setfit-all-roberta-large-model-v0.1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
aware-ai/wav2vec2-base-german | aware-ai | 2023-08-24T08:01:53Z | 104 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_10_0",
"generated_from_trainer",
"de",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-09-01T19:46:01Z | ---
language:
- de
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_10_0
- generated_from_trainer
model-index:
- name: wav2vec2-base-german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-german
This model is a fine-tuned version of [wav2vec2-base-german](https://huggingface.co/wav2vec2-base-german) on the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9302
- Wer: 0.7428
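A minimal sketch of transcribing German speech with the `transformers` ASR pipeline; the audio path is a placeholder for a 16 kHz recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aware-ai/wav2vec2-base-german")

# "sample_de.wav" is a placeholder path to a German speech recording.
print(asr("sample_de.wav")["text"])
```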
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8427 | 1.0 | 451 | 1.0878 | 0.8091 |
| 0.722 | 2.0 | 902 | 0.9732 | 0.7593 |
| 0.6589 | 3.0 | 1353 | 0.9302 | 0.7428 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
juandalibaba/my_awesome_wnut_model | juandalibaba | 2023-08-24T07:56:48Z | 65 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-23T06:40:28Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: juandalibaba/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juandalibaba/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6376
- Validation Loss: 1.8223
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7876 | 1.9931 | 0 |
| 1.7614 | 1.8223 | 1 |
| 1.6376 | 1.8223 | 2 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
amazon/sm-hackathon-actionability-9-multi-outputs-setfit-model-v0.1 | amazon | 2023-08-24T07:48:06Z | 24 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-08-24T07:28:22Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amazon/sm-hackathon-actionability-9-multi-outputs-setfit-model-v0.1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amazon/sm-hackathon-actionability-9-multi-outputs-setfit-model-v0.1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
ArneJa/Taxi | ArneJa | 2023-08-24T07:42:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T07:42:43Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ArneJa/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
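The snippet above assumes `gym` is imported and that a `load_from_hub` helper is already defined (in the Deep RL course it lives in the notebook rather than in a library). A minimal stand-in, assuming the checkpoint is a pickled dictionary, might look like:
```python
import pickle

import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dictionary (Q-table, env_id, ...) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```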
|
avasaz/avasaz-large | avasaz | 2023-08-24T07:30:53Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"license:mit",
"region:us"
] | text-to-audio | 2023-08-23T19:46:30Z | ---
inference: false
tags:
- musicgen
license: mit
---
# Avasaz Large (3.3B) - Make music directly from your ideas
<p align="center">
<img src="https://huggingface.co/avasaz/avasaz-large/resolve/main/avasaz_logo.png" width=256 height=256 />
</p>
## What is Avasaz?
Avasaz (a combination of the Persian words آوا, meaning song, and ساز, meaning maker, so it literally translates to _song maker_) is a _state-of-the-art generative AI model_ that can help you turn your ideas into music in a matter of minutes. This model has been developed by [Muhammadreza Haghiri](https://haghiri75.com/en) as part of an effort to build a suite of AI programs that make the world a better place for future generations.
## How can you use Avasaz?
[](https://colab.research.google.com/github/prp-e/avasaz/blob/main/Avasaz_Inference.ipynb)
Currently, inference is only available on _Colab_. Code will be added here as soon as possible. |
neil-code/autotrain-summarization-84573142568 | neil-code | 2023-08-24T07:22:08Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:neil-code/autotrain-data-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-08-24T07:16:47Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- neil-code/autotrain-data-summarization
co2_eq_emissions:
emissions: 3.1909973371323623
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 84573142568
- CO2 Emissions (in grams): 3.1910
## Validation Metrics
- Loss: 1.445
- Rouge1: 33.737
- Rouge2: 11.210
- RougeL: 28.204
- RougeLsum: 30.262
- Gen Len: 18.836
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/neil-code/autotrain-summarization-84573142568
``` |
achmaddaa/ametv2 | achmaddaa | 2023-08-24T07:07:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T07:04:20Z | ---
license: creativeml-openrail-m
---
|
DineshK/dummy-model | DineshK | 2023-08-24T07:05:34Z | 59 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-08-24T07:03:17Z | ---
license: mit
base_model: camembert-base
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
LarryAIDraw/meinaalter_v3 | LarryAIDraw | 2023-08-24T07:01:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T06:03:44Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/20945?modelVersionId=112825 |
ardt-multipart/ardt-multipart-arrl_sgld_train_walker2d_high-2408_0701-66 | ardt-multipart | 2023-08-24T06:56:49Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T06:03:00Z | ---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_sgld_train_walker2d_high-2408_0701-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_sgld_train_walker2d_high-2408_0701-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
greenyslimerfahrungen/greenyslimerfahrungen | greenyslimerfahrungen | 2023-08-24T06:45:50Z | 0 | 0 | espnet | [
"espnet",
"Greeny Slim Erfahrungen",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-24T06:45:09Z | ---
license: cc-by-nc-sa-4.0
language:
- en
library_name: espnet
tags:
- Greeny Slim Erfahrungen
---
[Greeny Slim Erfahrungen](https://supplementtycoon.com/de/greeny-slim-fruchtgummis/) Note, however, that even though they are low in carbs and sugar, they should still be consumed in moderation as part of a balanced diet. As always, read the nutrition label and ingredient list carefully before buying any keto gummies to make sure they fit your dietary goals and preferences.
VISIT HERE FOR OFFICIAL WEBSITE: https://supplementtycoon.com/de/greeny-slim-fruchtgummis/
|
k1101jh/q-Taxi-v3 | k1101jh | 2023-08-24T06:44:50Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T06:44:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="k1101jh/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
k1101jh/q-FrozenLake-v1-4x4-noSlippery | k1101jh | 2023-08-24T06:38:56Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T06:38:54Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="k1101jh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dkimds/a2c-PandaReachDense-v3 | dkimds | 2023-08-24T06:17:56Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T06:12:25Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
HGV1408/Data | HGV1408 | 2023-08-24T06:17:51Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-24T06:15:20Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4834
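A minimal sketch of summarizing a dialogue with the `transformers` pipeline; the sample conversation is an invented placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="HGV1408/Data")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```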
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6997 | 0.54 | 500 | 1.4834 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
IngeniousArtist/openllama-3b-finance | IngeniousArtist | 2023-08-24T05:36:30Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:finetune:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T20:47:22Z | ---
license: apache-2.0
base_model: openlm-research/open_llama_3b_v2
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: openllama-3b-finance
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_50agree
split: train
args: sentences_50agree
metrics:
- name: Accuracy
type: accuracy
value: 0.4142561983471074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openllama-3b-finance
This model is a fine-tuned version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0296
- Accuracy: 0.4143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 21.9655 | 0.01 | 20 | 8.1663 | 0.0816 |
| 2.231 | 0.01 | 40 | 6.3007 | 0.4143 |
| 2.7452 | 0.02 | 60 | 4.0892 | 0.4143 |
| 2.4561 | 0.02 | 80 | 5.0314 | 0.4143 |
| 2.337 | 0.03 | 100 | 5.6176 | 0.4143 |
| 3.2226 | 0.03 | 120 | 4.4963 | 0.4143 |
| 2.5633 | 0.04 | 140 | 6.1800 | 0.4143 |
| 2.4764 | 0.04 | 160 | 4.7059 | 0.4143 |
| 2.45 | 0.05 | 180 | 5.0602 | 0.4143 |
| 1.4232 | 0.05 | 200 | 5.3418 | 0.4143 |
| 2.7684 | 0.06 | 220 | 5.1805 | 0.4143 |
| 1.7065 | 0.06 | 240 | 4.7568 | 0.4143 |
| 2.3417 | 0.07 | 260 | 6.1062 | 0.4143 |
| 1.907 | 0.07 | 280 | 12.0988 | 0.5041 |
| 14.6043 | 0.08 | 300 | 3.0283 | 0.0816 |
| 1.337 | 0.08 | 320 | 12.7786 | 0.4143 |
| 4.182 | 0.09 | 340 | 7.5619 | 0.4143 |
| 3.7365 | 0.09 | 360 | 7.8581 | 0.4143 |
| 3.209 | 0.1 | 380 | 3.2547 | 0.4143 |
| 3.4836 | 0.1 | 400 | 89.8525 | 0.0816 |
| 4.5805 | 0.11 | 420 | 103.0762 | 0.4143 |
| 4.6351 | 0.11 | 440 | 91.4501 | 0.4143 |
| 11.0873 | 0.12 | 460 | 88.0469 | 0.4143 |
| 1.1274 | 0.12 | 480 | 86.7130 | 0.4143 |
| 2.0398 | 0.13 | 500 | 86.4186 | 0.4143 |
| 18.6924 | 0.13 | 520 | 80.1491 | 0.4143 |
| 1.2216 | 0.14 | 540 | 76.8429 | 0.4143 |
| 1.1179 | 0.14 | 560 | 78.0159 | 0.4143 |
| 10.0981 | 0.15 | 580 | 71.1114 | 0.4143 |
| 9.0123 | 0.15 | 600 | 66.2945 | 0.4143 |
| 1.9539 | 0.16 | 620 | 65.6854 | 0.4143 |
| 8.4729 | 0.17 | 640 | 62.1595 | 0.4143 |
| 7.816 | 0.17 | 660 | 52.0763 | 0.4143 |
| 6.0443 | 0.18 | 680 | 41.1500 | 0.4143 |
| 3.1804 | 0.18 | 700 | 42.8007 | 0.4143 |
| 1.6122 | 0.19 | 720 | 44.0976 | 0.4143 |
| 9.8927 | 0.19 | 740 | 31.6381 | 0.4143 |
| 6.828 | 0.2 | 760 | 12.7483 | 0.4143 |
| 3.1457 | 0.2 | 780 | 13.2981 | 0.4143 |
| 1.9991 | 0.21 | 800 | 12.4846 | 0.4143 |
| 2.5539 | 0.21 | 820 | 13.7669 | 0.4143 |
| 1.3898 | 0.22 | 840 | 12.8919 | 0.0816 |
| 2.9251 | 0.22 | 860 | 15.9149 | 0.0816 |
| 4.0874 | 0.23 | 880 | 10.5282 | 0.4143 |
| 2.4763 | 0.23 | 900 | 3.0281 | 0.4143 |
| 2.2865 | 0.24 | 920 | 12.2460 | 0.4143 |
| 4.2438 | 0.24 | 940 | 10.1961 | 0.4143 |
| 2.547 | 0.25 | 960 | 1.4099 | 0.4143 |
| 0.8659 | 0.25 | 980 | 8.3217 | 0.4143 |
| 3.5331 | 0.26 | 1000 | 6.3990 | 0.4143 |
| 2.4704 | 0.26 | 1020 | 2.2337 | 0.0816 |
| 2.1381 | 0.27 | 1040 | 10.6263 | 0.4143 |
| 1.5927 | 0.27 | 1060 | 11.1989 | 0.4143 |
| 2.485 | 0.28 | 1080 | 8.8174 | 0.4143 |
| 2.8074 | 0.28 | 1100 | 5.5971 | 0.4143 |
| 0.8622 | 0.29 | 1120 | 5.5089 | 0.4143 |
| 2.8085 | 0.29 | 1140 | 5.4300 | 0.4143 |
| 1.2405 | 0.3 | 1160 | 7.5657 | 0.4143 |
| 3.9374 | 0.3 | 1180 | 2.7180 | 0.4143 |
| 1.7494 | 0.31 | 1200 | 4.9639 | 0.0816 |
| 2.6094 | 0.32 | 1220 | 2.1980 | 0.4143 |
| 2.2072 | 0.32 | 1240 | 7.3392 | 0.4143 |
| 0.9978 | 0.33 | 1260 | 7.9127 | 0.4143 |
| 2.3872 | 0.33 | 1280 | 7.0613 | 0.4143 |
| 3.3129 | 0.34 | 1300 | 4.4202 | 0.4143 |
| 1.776 | 0.34 | 1320 | 6.1467 | 0.4143 |
| 3.1179 | 0.35 | 1340 | 6.0607 | 0.4143 |
| 1.272 | 0.35 | 1360 | 5.0484 | 0.4143 |
| 3.0694 | 0.36 | 1380 | 3.1665 | 0.4143 |
| 1.9452 | 0.36 | 1400 | 4.8692 | 0.4143 |
| 2.3689 | 0.37 | 1420 | 4.9375 | 0.4143 |
| 2.7082 | 0.37 | 1440 | 3.2108 | 0.4143 |
| 0.8244 | 0.38 | 1460 | 7.0151 | 0.4143 |
| 2.6032 | 0.38 | 1480 | 5.5645 | 0.4143 |
| 2.8745 | 0.39 | 1500 | 4.2408 | 0.4143 |
| 2.625 | 0.39 | 1520 | 6.8800 | 0.4143 |
| 2.5335 | 0.4 | 1540 | 6.3109 | 0.4143 |
| 2.5495 | 0.4 | 1560 | 4.4017 | 0.4143 |
| 1.7234 | 0.41 | 1580 | 5.1739 | 0.4143 |
| 2.1066 | 0.41 | 1600 | 6.0769 | 0.4143 |
| 2.5541 | 0.42 | 1620 | 3.7539 | 0.4143 |
| 2.4598 | 0.42 | 1640 | 4.2075 | 0.4143 |
| 1.7211 | 0.43 | 1660 | 5.3975 | 0.4143 |
| 2.3993 | 0.43 | 1680 | 4.1427 | 0.4143 |
| 1.6161 | 0.44 | 1700 | 5.0871 | 0.4143 |
| 2.2361 | 0.44 | 1720 | 4.3375 | 0.4143 |
| 2.0841 | 0.45 | 1740 | 4.7357 | 0.4143 |
| 2.137 | 0.45 | 1760 | 5.2737 | 0.4143 |
| 2.3819 | 0.46 | 1780 | 3.1688 | 0.4143 |
| 2.6391 | 0.46 | 1800 | 5.6169 | 0.4143 |
| 1.276 | 0.47 | 1820 | 6.1945 | 0.4143 |
| 2.0694 | 0.48 | 1840 | 6.3761 | 0.4143 |
| 2.3715 | 0.48 | 1860 | 6.1666 | 0.4143 |
| 2.1428 | 0.49 | 1880 | 6.4718 | 0.4143 |
| 2.0409 | 0.49 | 1900 | 6.3259 | 0.4143 |
| 2.1924 | 0.5 | 1920 | 6.0853 | 0.4143 |
| 2.3511 | 0.5 | 1940 | 4.7199 | 0.4143 |
| 2.7335 | 0.51 | 1960 | 4.3591 | 0.4143 |
| 1.6784 | 0.51 | 1980 | 3.7488 | 0.1612 |
| 1.5525 | 0.52 | 2000 | 6.0497 | 0.4143 |
| 2.7457 | 0.52 | 2020 | 3.5952 | 0.4143 |
| 2.3929 | 0.53 | 2040 | 4.7684 | 0.4143 |
| 1.9522 | 0.53 | 2060 | 5.6394 | 0.4143 |
| 2.2257 | 0.54 | 2080 | 4.5801 | 0.4143 |
| 1.6753 | 0.54 | 2100 | 5.0521 | 0.4143 |
| 1.6154 | 0.55 | 2120 | 5.4730 | 0.4143 |
| 1.7723 | 0.55 | 2140 | 5.5251 | 0.4143 |
| 2.6963 | 0.56 | 2160 | 3.5098 | 0.4143 |
| 1.7274 | 0.56 | 2180 | 5.4262 | 0.4143 |
| 2.4059 | 0.57 | 2200 | 4.5019 | 0.4143 |
| 1.6505 | 0.57 | 2220 | 5.1107 | 0.4143 |
| 1.2469 | 0.58 | 2240 | 5.3456 | 0.4143 |
| 1.6702 | 0.58 | 2260 | 5.4103 | 0.4143 |
| 1.615 | 0.59 | 2280 | 5.8024 | 0.4143 |
| 1.5622 | 0.59 | 2300 | 5.6035 | 0.4143 |
| 2.3536 | 0.6 | 2320 | 5.3779 | 0.4143 |
| 2.0512 | 0.6 | 2340 | 5.2498 | 0.4143 |
| 2.1405 | 0.61 | 2360 | 5.2279 | 0.4143 |
| 2.1926 | 0.61 | 2380 | 4.3260 | 0.4143 |
| 2.3995 | 0.62 | 2400 | 4.4445 | 0.4143 |
| 1.4944 | 0.62 | 2420 | 4.9616 | 0.4143 |
| 2.6623 | 0.63 | 2440 | 4.9736 | 0.4143 |
| 1.4095 | 0.64 | 2460 | 4.6506 | 0.4143 |
| 2.4803 | 0.64 | 2480 | 4.0971 | 0.4143 |
| 1.2721 | 0.65 | 2500 | 4.3192 | 0.4143 |
| 1.8372 | 0.65 | 2520 | 4.4907 | 0.4143 |
| 1.8942 | 0.66 | 2540 | 4.7323 | 0.4143 |
| 2.1407 | 0.66 | 2560 | 4.9554 | 0.4143 |
| 2.5039 | 0.67 | 2580 | 5.1599 | 0.4143 |
| 1.7321 | 0.67 | 2600 | 5.6089 | 0.4143 |
| 2.0621 | 0.68 | 2620 | 4.8359 | 0.4143 |
| 2.1664 | 0.68 | 2640 | 4.5581 | 0.4143 |
| 1.8835 | 0.69 | 2660 | 5.1029 | 0.4143 |
| 3.0314 | 0.69 | 2680 | 3.9587 | 0.4143 |
| 1.1781 | 0.7 | 2700 | 4.4584 | 0.4143 |
| 3.3222 | 0.7 | 2720 | 4.7628 | 0.4143 |
| 2.1184 | 0.71 | 2740 | 4.4039 | 0.4143 |
| 1.9293 | 0.71 | 2760 | 3.8755 | 0.4143 |
| 2.2448 | 0.72 | 2780 | 4.4327 | 0.4143 |
| 2.4697 | 0.72 | 2800 | 3.3026 | 0.4143 |
| 1.8569 | 0.73 | 2820 | 3.7722 | 0.4143 |
| 0.8544 | 0.73 | 2840 | 4.9176 | 0.4143 |
| 2.2445 | 0.74 | 2860 | 4.3889 | 0.4143 |
| 1.3723 | 0.74 | 2880 | 4.3280 | 0.4143 |
| 2.2167 | 0.75 | 2900 | 4.4016 | 0.4143 |
| 1.98 | 0.75 | 2920 | 3.8661 | 0.4143 |
| 1.7344 | 0.76 | 2940 | 3.7919 | 0.4143 |
| 1.924 | 0.76 | 2960 | 4.1408 | 0.4143 |
| 1.3811 | 0.77 | 2980 | 4.3730 | 0.4143 |
| 1.8289 | 0.77 | 3000 | 4.2872 | 0.4143 |
| 1.9573 | 0.78 | 3020 | 4.6165 | 0.4143 |
| 2.4877 | 0.78 | 3040 | 4.5988 | 0.4143 |
| 1.1749 | 0.79 | 3060 | 4.7887 | 0.4143 |
| 2.1835 | 0.8 | 3080 | 4.9018 | 0.4143 |
| 2.3752 | 0.8 | 3100 | 4.6911 | 0.4143 |
| 1.9741 | 0.81 | 3120 | 4.5126 | 0.4143 |
| 1.7513 | 0.81 | 3140 | 4.6251 | 0.4143 |
| 3.0666 | 0.82 | 3160 | 4.0260 | 0.4143 |
| 0.5569 | 0.82 | 3180 | 4.0965 | 0.4143 |
| 2.1805 | 0.83 | 3200 | 4.5240 | 0.4143 |
| 2.4319 | 0.83 | 3220 | 4.3080 | 0.4143 |
| 2.126 | 0.84 | 3240 | 3.7823 | 0.4143 |
| 1.6993 | 0.84 | 3260 | 3.8093 | 0.4143 |
| 0.6861 | 0.85 | 3280 | 4.1618 | 0.4143 |
| 0.748 | 0.85 | 3300 | 4.5653 | 0.4143 |
| 2.5721 | 0.86 | 3320 | 4.6628 | 0.4143 |
| 2.0137 | 0.86 | 3340 | 4.2796 | 0.4143 |
| 2.1864 | 0.87 | 3360 | 4.1173 | 0.4143 |
| 2.4881 | 0.87 | 3380 | 3.9617 | 0.4143 |
| 2.6837 | 0.88 | 3400 | 3.7575 | 0.4143 |
| 1.5951 | 0.88 | 3420 | 3.6086 | 0.4143 |
| 2.504 | 0.89 | 3440 | 3.5919 | 0.4143 |
| 1.4982 | 0.89 | 3460 | 3.7519 | 0.4143 |
| 1.8994 | 0.9 | 3480 | 3.7120 | 0.4143 |
| 1.6126 | 0.9 | 3500 | 3.6854 | 0.4143 |
| 2.002 | 0.91 | 3520 | 3.7888 | 0.4143 |
| 1.0264 | 0.91 | 3540 | 3.7990 | 0.4143 |
| 1.9495 | 0.92 | 3560 | 3.9635 | 0.4143 |
| 2.0742 | 0.92 | 3580 | 3.9651 | 0.4143 |
| 1.7803 | 0.93 | 3600 | 3.9518 | 0.4143 |
| 2.0843 | 0.93 | 3620 | 3.9404 | 0.4143 |
| 1.8431 | 0.94 | 3640 | 3.9334 | 0.4143 |
| 1.4987 | 0.95 | 3660 | 3.9609 | 0.4143 |
| 1.8214 | 0.95 | 3680 | 4.0060 | 0.4143 |
| 1.0964 | 0.96 | 3700 | 4.0422 | 0.4143 |
| 0.9669 | 0.96 | 3720 | 4.0549 | 0.4143 |
| 1.6226 | 0.97 | 3740 | 4.0486 | 0.4143 |
| 1.8061 | 0.97 | 3760 | 4.0405 | 0.4143 |
| 2.8738 | 0.98 | 3780 | 4.0317 | 0.4143 |
| 1.684 | 0.98 | 3800 | 4.0319 | 0.4143 |
| 1.1158 | 0.99 | 3820 | 4.0303 | 0.4143 |
| 1.775 | 0.99 | 3840 | 4.0294 | 0.4143 |
| 2.1639 | 1.0 | 3860 | 4.0296 | 0.4143 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Afbnff/B | Afbnff | 2023-08-24T05:29:13Z | 0 | 0 | null | [
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | null | 2023-08-24T05:28:01Z | ---
datasets:
- fka/awesome-chatgpt-prompts
metrics:
- accuracy
--- |
ishvalin/mt5-small-finetuned-amazon-en-es | ishvalin | 2023-08-24T05:17:08Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-08-24T04:43:56Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0301
- Rouge1: 17.4531
- Rouge2: 9.0091
- Rougel: 17.0836
- Rougelsum: 17.1528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6834 | 1.0 | 1209 | 3.2483 | 15.494 | 7.8022 | 15.1402 | 15.2041 |
| 3.6689 | 2.0 | 2418 | 3.1014 | 16.6941 | 8.9493 | 15.9414 | 16.1157 |
| 3.4493 | 3.0 | 3627 | 3.0640 | 16.5731 | 8.2808 | 16.0156 | 16.1514 |
| 3.3175 | 4.0 | 4836 | 3.0375 | 16.8245 | 8.6021 | 16.2052 | 16.3956 |
| 3.2303 | 5.0 | 6045 | 3.0312 | 17.8902 | 9.7012 | 17.3184 | 17.5092 |
| 3.1693 | 6.0 | 7254 | 3.0255 | 16.985 | 8.7225 | 16.6058 | 16.7549 |
| 3.1357 | 7.0 | 8463 | 3.0235 | 17.015 | 9.0093 | 16.7306 | 16.9061 |
| 3.1073 | 8.0 | 9672 | 3.0301 | 17.4531 | 9.0091 | 17.0836 | 17.1528 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
timetoai/distilbert-base-uncased-arxiv-abstracts-10k | timetoai | 2023-08-24T05:08:14Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-08-21T04:38:43Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-arxiv-abstracts-10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-arxiv-abstracts-10k
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 166 | 2.2911 |
| No log | 2.0 | 332 | 2.1673 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
JpChi/pegasus-samsum | JpChi | 2023-08-24T05:07:22Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-24T04:07:05Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.08 | 0.27 | 500 | 1.5162 |
| 1.6341 | 0.54 | 1000 | 1.4381 |
| 1.5749 | 0.81 | 1500 | 1.4079 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
openbmb/UltraLM-65b | openbmb | 2023-08-24T04:58:51Z | 1,565 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:stingning/ultrachat",
"arxiv:2305.14233",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-18T09:33:47Z | ---
datasets:
- stingning/ultrachat
---
# UltraLM-65b
<!-- Provide a quick summary of what the model is/does. -->
This is UltraLM-65b delta weights, a chat language model trained upon [UltraChat](https://github.com/thunlp/UltraChat)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The model is fine-tuned based on LLaMA-65b with a multi-turn chat-format template as below
```
User: instruction 1
Assistant: response 1<eos_token>
User: instruction 2
Assistant: response 2<eos_token>
...
```
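A minimal sketch of assembling a prompt string that follows this template; the eos token value is an assumption and should match the tokenizer of the recovered model:
```python
def build_prompt(turns, system_prompt=None, eos_token="</s>"):
    """Build a multi-turn prompt in the template shown above.

    `turns` is a list of (instruction, response) pairs; pass None as the last
    response to ask the model for the next reply.
    """
    parts = []
    if system_prompt:
        parts.append(f"User: {system_prompt}")
    for instruction, response in turns:
        parts.append(f"User: {instruction}")
        parts.append("Assistant:" if response is None else f"Assistant: {response}{eos_token}")
    return "\n".join(parts)

print(build_prompt([("What is UltraChat?", None)]))
```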
- **License:** UltraLM is based on LLaMA and should be used under LLaMA's [model license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
- **Finetuned from model:** LLaMA-65b
- **Finetuned on data:** [UltraChat](https://github.com/thunlp/UltraChat)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [UltraChat](https://github.com/thunlp/UltraChat)
- **Paper:** [arxiv](https://arxiv.org/abs/2305.14233)
- **Demo:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use this model, you need to [recover](https://github.com/thunlp/UltraChat/tree/main/UltraLM) the full model from the delta weights and perform inference following the template below:
```
[Optional]User: system prompt
User: user input
Assistant:
``` |
neil-code/autotrain-test-summarization-84415142559 | neil-code | 2023-08-24T04:28:12Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:neil-code/autotrain-data-test-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-08-24T04:23:26Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- neil-code/autotrain-data-test-summarization
co2_eq_emissions:
emissions: 3.0878646296058494
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 84415142559
- CO2 Emissions (in grams): 3.0879
## Validation Metrics
- Loss: 1.534
- Rouge1: 33.336
- Rouge2: 11.361
- RougeL: 27.779
- RougeLsum: 29.966
- Gen Len: 18.773
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/neil-code/autotrain-test-summarization-84415142559
``` |
larryvrh/tigerbot-13b-chat-sharegpt-lora | larryvrh | 2023-08-24T04:27:43Z | 0 | 1 | null | [
"text-generation",
"zh",
"dataset:larryvrh/sharegpt_zh-only",
"region:us"
] | text-generation | 2023-08-24T02:22:02Z | ---
datasets:
- larryvrh/sharegpt_zh-only
language:
- zh
pipeline_tag: text-generation
---
[TigerResearch/tigerbot-13b-chat](https://huggingface.co/TigerResearch/tigerbot-13b-chat) re-aligned on 8,631 Chinese ShareGPT conversations from [larryvrh/sharegpt_zh-only](https://huggingface.co/datasets/larryvrh/sharegpt_zh-only).
This improves the model's ability to keep track of context across multi-turn conversations, and fixes cases in some scenarios where its replies came across as off-putting.
Before fine-tuning:

After fine-tuning:

You can use the bundled [webui](https://huggingface.co/larryvrh/tigerbot-13b-chat-sharegpt-lora/blob/main/chat_webui.py) to run quick tests.
 |
antoinerossupedu/bert-playground | antoinerossupedu | 2023-08-24T04:25:14Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-24T04:10:04Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-playground
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.606823117358914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-playground
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8178
- Matthews Correlation: 0.6068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4401 | 1.0 | 1069 | 0.4155 | 0.5720 |
| 0.3121 | 2.0 | 2138 | 0.6457 | 0.6039 |
| 0.1764 | 3.0 | 3207 | 0.8178 | 0.6068 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TariqJamil/llama-7b-minigunaco-0805 | TariqJamil | 2023-08-24T03:58:37Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T16:48:35Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
xszhou/ppo-LunarLander-v2 | xszhou | 2023-08-24T03:44:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T03:44:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.49 +/- 17.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dkqjrm/20230824104542 | dkqjrm | 2023-08-24T03:41:11Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-24T01:46:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824104542'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824104542
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3421
- Accuracy: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 1.0891 | 0.5307 |
| 0.5902 | 2.0 | 624 | 0.6221 | 0.4765 |
| 0.5902 | 3.0 | 936 | 0.4801 | 0.5379 |
| 0.5511 | 4.0 | 1248 | 0.4461 | 0.5054 |
| 0.5299 | 5.0 | 1560 | 0.5922 | 0.5162 |
| 0.5299 | 6.0 | 1872 | 0.4113 | 0.5199 |
| 0.509 | 7.0 | 2184 | 0.4885 | 0.5451 |
| 0.509 | 8.0 | 2496 | 0.4106 | 0.4910 |
| 0.4976 | 9.0 | 2808 | 0.5019 | 0.4874 |
| 0.4898 | 10.0 | 3120 | 0.4132 | 0.5307 |
| 0.4898 | 11.0 | 3432 | 0.4564 | 0.4874 |
| 0.4739 | 12.0 | 3744 | 0.4919 | 0.5307 |
| 0.4594 | 13.0 | 4056 | 0.4235 | 0.4982 |
| 0.4594 | 14.0 | 4368 | 0.3937 | 0.5812 |
| 0.4444 | 15.0 | 4680 | 0.3871 | 0.5812 |
| 0.4444 | 16.0 | 4992 | 0.4123 | 0.6065 |
| 0.4334 | 17.0 | 5304 | 0.3986 | 0.6209 |
| 0.4045 | 18.0 | 5616 | 0.4088 | 0.6029 |
| 0.4045 | 19.0 | 5928 | 0.3935 | 0.6209 |
| 0.3999 | 20.0 | 6240 | 0.3645 | 0.6715 |
| 0.376 | 21.0 | 6552 | 0.4230 | 0.5740 |
| 0.376 | 22.0 | 6864 | 0.3911 | 0.6823 |
| 0.3683 | 23.0 | 7176 | 0.5057 | 0.6534 |
| 0.3683 | 24.0 | 7488 | 0.3273 | 0.7040 |
| 0.3501 | 25.0 | 7800 | 0.3663 | 0.7004 |
| 0.344 | 26.0 | 8112 | 0.3755 | 0.6931 |
| 0.344 | 27.0 | 8424 | 0.3648 | 0.7112 |
| 0.3354 | 28.0 | 8736 | 0.3359 | 0.7148 |
| 0.3288 | 29.0 | 9048 | 0.3362 | 0.7112 |
| 0.3288 | 30.0 | 9360 | 0.5539 | 0.6787 |
| 0.3199 | 31.0 | 9672 | 0.3617 | 0.7112 |
| 0.3199 | 32.0 | 9984 | 0.3601 | 0.7184 |
| 0.3166 | 33.0 | 10296 | 0.3325 | 0.7292 |
| 0.3037 | 34.0 | 10608 | 0.3274 | 0.7256 |
| 0.3037 | 35.0 | 10920 | 0.3412 | 0.7076 |
| 0.2987 | 36.0 | 11232 | 0.3509 | 0.7256 |
| 0.2842 | 37.0 | 11544 | 0.3945 | 0.7076 |
| 0.2842 | 38.0 | 11856 | 0.3224 | 0.7365 |
| 0.2894 | 39.0 | 12168 | 0.4010 | 0.7148 |
| 0.2894 | 40.0 | 12480 | 0.3472 | 0.7220 |
| 0.2764 | 41.0 | 12792 | 0.3364 | 0.7112 |
| 0.2708 | 42.0 | 13104 | 0.3379 | 0.7040 |
| 0.2708 | 43.0 | 13416 | 0.3625 | 0.7148 |
| 0.2665 | 44.0 | 13728 | 0.3435 | 0.7220 |
| 0.265 | 45.0 | 14040 | 0.3762 | 0.7292 |
| 0.265 | 46.0 | 14352 | 0.3322 | 0.7220 |
| 0.2618 | 47.0 | 14664 | 0.3265 | 0.7329 |
| 0.2618 | 48.0 | 14976 | 0.3752 | 0.7292 |
| 0.2513 | 49.0 | 15288 | 0.3415 | 0.7292 |
| 0.2487 | 50.0 | 15600 | 0.3604 | 0.7220 |
| 0.2487 | 51.0 | 15912 | 0.3484 | 0.7292 |
| 0.2488 | 52.0 | 16224 | 0.3598 | 0.7329 |
| 0.2404 | 53.0 | 16536 | 0.3719 | 0.7184 |
| 0.2404 | 54.0 | 16848 | 0.3329 | 0.7220 |
| 0.2359 | 55.0 | 17160 | 0.3535 | 0.7220 |
| 0.2359 | 56.0 | 17472 | 0.3606 | 0.7256 |
| 0.2364 | 57.0 | 17784 | 0.3407 | 0.7292 |
| 0.2343 | 58.0 | 18096 | 0.3342 | 0.7292 |
| 0.2343 | 59.0 | 18408 | 0.3451 | 0.7220 |
| 0.2348 | 60.0 | 18720 | 0.3421 | 0.7256 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chezuro/pm-fine-tuned | chezuro | 2023-08-24T03:22:56Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-24T00:55:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
ChillyMango/results | ChillyMango | 2023-08-24T03:16:01Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:finetune:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2023-08-24T00:42:24Z | ---
base_model: NousResearch/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ohicarip/sd-deepfashion-baseline-model | ohicarip | 2023-08-24T02:45:46Z | 4 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:ohicarip/deepfashion_bl2",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-20T19:40:51Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
datasets:
- ohicarip/deepfashion_bl2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - ohicarip/sd-deepfashion-baseline-model
This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **ohicarip/deepfashion_bl2** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['This man wears a long-sleeve sweater with pure color patterns. The sweater is with cotton fabric. It has a round neckline. The pants this man wears is of long length. The pants are with denim fabric and solid color patterns. The outer clothing the gentleman wears is with cotton fabric and solid color patterns. There is an accessory on his wrist.', 'This person is wearing a short-sleeve shirt with pure color patterns. The shirt is with cotton fabric. It has a round neckline. This person wears a long trousers. The trousers are with denim fabric and lattice patterns.', 'This guy is wearing a short-sleeve shirt with solid color patterns and a long pants. The shirt is with cotton fabric and its neckline is crew. The pants are with denim fabric and solid color patterns.', 'This female is wearing a tank tank shirt with plaid patterns and a three-point shorts. The tank shirt is with cotton fabric. The neckline of the tank shirt is crew. The shorts are with cotton fabric and plaid patterns. This lady wears socks in shoes.']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("ohicarip/sd-deepfashion-baseline-model", torch_dtype=torch.float16)
prompt = "This man wears a long-sleeve sweater with pure color patterns. The sweater is with cotton fabric. It has a round neckline. The pants this man wears is of long length. The pants are with denim fabric and solid color patterns. The outer clothing the gentleman wears is with cotton fabric and solid color patterns. There is an accessory on his wrist."
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 15
* Learning rate: 1e-05
* Batch size: 8
* Gradient accumulation steps: 4
* Image resolution: 512
* Mixed-precision: fp16
More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/ohicarip/text2image-fine-tune/runs/6en1otkv).
|
Timucin/q-Taxi | Timucin | 2023-08-24T02:44:53Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T02:44:51Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Timucin/q-Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mcwei/rvinpaint | mcwei | 2023-08-24T02:39:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T00:41:06Z | ---
license: creativeml-openrail-m
---
|
Timucin/q-FrozenLake-v1-4x4-noSlippery | Timucin | 2023-08-24T02:38:11Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T02:38:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Timucin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AndreaHuang97/MarkupLM | AndreaHuang97 | 2023-08-24T02:32:52Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"markuplm",
"text2text-generation",
"en",
"arxiv:2110.08518",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-24T01:40:47Z | ---
language:
- en
pipeline_tag: text2text-generation
---
# MarkupLM
**Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction
MarkupLM is a simple but effective multi-modal pre-training method over text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves state-of-the-art results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei, ACL 2022
## Usage
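As a rough sketch only (the checkpoint name, argument names, and outputs below follow the upstream `transformers` documentation for MarkupLM and should be double-checked there), extractive question answering over an HTML string might look like this:
```python
import torch
from transformers import MarkupLMProcessor, MarkupLMForQuestionAnswering

# Assumed checkpoint from the transformers docs (WebSRC-finetuned), not this repository.
ckpt = "microsoft/markuplm-base-finetuned-websrc"
processor = MarkupLMProcessor.from_pretrained(ckpt)
model = MarkupLMForQuestionAnswering.from_pretrained(ckpt)

html_string = "<html><head><title>My name is Niels</title></head></html>"
question = "What's his name?"

encoding = processor(html_string, questions=question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
answer = processor.decode(encoding.input_ids[0, start : end + 1], skip_special_tokens=True)
print(answer)
```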
We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/markuplm) and [demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MarkupLM). |
LarryAIDraw/Lucy-08 | LarryAIDraw | 2023-08-24T02:23:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:06:40Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/132939/lucy-seiland-trails-of-cold-steel-4-sen-no-kiseki-4 |
LarryAIDraw/Aurier-10 | LarryAIDraw | 2023-08-24T02:23:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:07:09Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/132943/aurier-vander-trails-of-cold-steel-3-sen-no-kiseki-3 |
LarryAIDraw/Kuroe_Casual_wear_-V1 | LarryAIDraw | 2023-08-24T02:22:49Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:04:29Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/133037/redivekuroe-princess-connect-redive |
LarryAIDraw/shizuku_yaegashi_v1 | LarryAIDraw | 2023-08-24T02:21:24Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:05:51Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/132963/shizuku-yaegashi-or-arifureta-from-commonplace-to-worlds-strongest |
LarryAIDraw/MiyuCind-06 | LarryAIDraw | 2023-08-24T02:20:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:05:29Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/132955/miyu-mifune-idolmaster |
LarryAIDraw/Fuwawa_Abyssgard-10 | LarryAIDraw | 2023-08-24T02:20:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:05:02Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/117233/fuwawa-abyssgard-hololive-en-lora |
LarryAIDraw/Atago_and_Takao_20230820183759-000014 | LarryAIDraw | 2023-08-24T02:19:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:03:56Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/133344/atago-and-tako-lora |
LarryAIDraw/shimanto | LarryAIDraw | 2023-08-24T02:18:34Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:03:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/133172/ijn-shimanto-or-azur-lane |
LarryAIDraw/Mary | LarryAIDraw | 2023-08-24T02:17:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:02:59Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/133210/mary-the-eminence-in-shadow |
LarryAIDraw/ChristinaHope | LarryAIDraw | 2023-08-24T02:17:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-24T02:02:16Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/133295/christina-hope-the-eminence-in-shadow |
lianlian123/Reinforce-CartPole8 | lianlian123 | 2023-08-24T02:14:00Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-23T08:21:31Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mmenendezg/xlm-roberta-base-finetuned-panx-de | mmenendezg | 2023-08-24T02:08:36Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-22T23:21:35Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.6378279372946183
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0742
- F1: 0.6378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2424 | 1.0 | 525 | 0.2543 | 0.0 |
| 0.1994 | 2.0 | 1050 | 0.0977 | 0.5081 |
| 0.1011 | 3.0 | 1575 | 0.0742 | 0.6378 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vicssl/test-trainer | vicssl | 2023-08-24T02:04:54Z | 0 | 0 | null | [
"sentence-similarity",
"region:us"
] | sentence-similarity | 2023-08-23T08:47:52Z | ---
pipeline_tag: sentence-similarity
--- |
ardt-multipart/ardt-multipart-arrl_train_walker2d_high-2408_0127-33 | ardt-multipart | 2023-08-24T02:03:02Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-24T00:28:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-arrl_train_walker2d_high-2408_0127-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-arrl_train_walker2d_high-2408_0127-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JJinBBangMan/marian-finetuned-kde4-en-to-fr | JJinBBangMan | 2023-08-24T02:00:41Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-08-24T00:10:39Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.853174528380514
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8568
- Bleu: 52.8532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
cooperic/distilbert-base-uncased-finetuned-emotion | cooperic | 2023-08-24T01:49:06Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-24T00:31:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9283528881025964
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.9285
- F1: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8012 | 1.0 | 250 | 0.3094 | 0.9095 | 0.9083 |
| 0.2454 | 2.0 | 500 | 0.2174 | 0.9285 | 0.9284 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dkqjrm/20230824083011 | dkqjrm | 2023-08-24T01:45:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T23:30:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824083011'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824083011
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3090
- Accuracy: 0.7401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7501 | 1.0 | 623 | 0.9859 | 0.4729 |
| 0.6252 | 2.0 | 1246 | 0.4891 | 0.4801 |
| 0.5769 | 3.0 | 1869 | 1.1271 | 0.4729 |
| 0.5672 | 4.0 | 2492 | 0.4257 | 0.5632 |
| 0.5439 | 5.0 | 3115 | 0.5883 | 0.5415 |
| 0.5426 | 6.0 | 3738 | 0.3734 | 0.6245 |
| 0.61 | 7.0 | 4361 | 0.4410 | 0.5848 |
| 0.4937 | 8.0 | 4984 | 0.4091 | 0.5632 |
| 0.4293 | 9.0 | 5607 | 0.3712 | 0.6282 |
| 0.3897 | 10.0 | 6230 | 0.3441 | 0.6931 |
| 0.3759 | 11.0 | 6853 | 0.3400 | 0.7004 |
| 0.379 | 12.0 | 7476 | 0.3802 | 0.6787 |
| 0.3661 | 13.0 | 8099 | 0.3456 | 0.7184 |
| 0.374 | 14.0 | 8722 | 0.3545 | 0.6859 |
| 0.3441 | 15.0 | 9345 | 0.3219 | 0.7112 |
| 0.3339 | 16.0 | 9968 | 0.3192 | 0.7184 |
| 0.3324 | 17.0 | 10591 | 0.3290 | 0.7184 |
| 0.324 | 18.0 | 11214 | 0.3284 | 0.7112 |
| 0.3641 | 19.0 | 11837 | 0.3100 | 0.7292 |
| 0.3138 | 20.0 | 12460 | 0.3102 | 0.7365 |
| 0.3099 | 21.0 | 13083 | 0.3887 | 0.7076 |
| 0.3095 | 22.0 | 13706 | 0.3443 | 0.7004 |
| 0.3039 | 23.0 | 14329 | 0.3937 | 0.6895 |
| 0.287 | 24.0 | 14952 | 0.3071 | 0.7473 |
| 0.2718 | 25.0 | 15575 | 0.3097 | 0.7184 |
| 0.2711 | 26.0 | 16198 | 0.2888 | 0.7329 |
| 0.2738 | 27.0 | 16821 | 0.2920 | 0.7220 |
| 0.2697 | 28.0 | 17444 | 0.2986 | 0.7329 |
| 0.2589 | 29.0 | 18067 | 0.3092 | 0.7437 |
| 0.2536 | 30.0 | 18690 | 0.3141 | 0.7292 |
| 0.2564 | 31.0 | 19313 | 0.3134 | 0.7401 |
| 0.2493 | 32.0 | 19936 | 0.2962 | 0.7365 |
| 0.2428 | 33.0 | 20559 | 0.3358 | 0.7256 |
| 0.2425 | 34.0 | 21182 | 0.3155 | 0.7148 |
| 0.2342 | 35.0 | 21805 | 0.3000 | 0.7220 |
| 0.2394 | 36.0 | 22428 | 0.2955 | 0.7329 |
| 0.2257 | 37.0 | 23051 | 0.3070 | 0.7509 |
| 0.2272 | 38.0 | 23674 | 0.2959 | 0.7365 |
| 0.2197 | 39.0 | 24297 | 0.3100 | 0.7401 |
| 0.2144 | 40.0 | 24920 | 0.3009 | 0.7365 |
| 0.2164 | 41.0 | 25543 | 0.2957 | 0.7256 |
| 0.2129 | 42.0 | 26166 | 0.3133 | 0.7292 |
| 0.2106 | 43.0 | 26789 | 0.3110 | 0.7329 |
| 0.2069 | 44.0 | 27412 | 0.3072 | 0.7329 |
| 0.2051 | 45.0 | 28035 | 0.3300 | 0.7292 |
| 0.2064 | 46.0 | 28658 | 0.3106 | 0.7256 |
| 0.2039 | 47.0 | 29281 | 0.3114 | 0.7292 |
| 0.2106 | 48.0 | 29904 | 0.3180 | 0.7365 |
| 0.2008 | 49.0 | 30527 | 0.3099 | 0.7329 |
| 0.1945 | 50.0 | 31150 | 0.3066 | 0.7329 |
| 0.1958 | 51.0 | 31773 | 0.3124 | 0.7401 |
| 0.1939 | 52.0 | 32396 | 0.3230 | 0.7401 |
| 0.1942 | 53.0 | 33019 | 0.3105 | 0.7365 |
| 0.1887 | 54.0 | 33642 | 0.3014 | 0.7256 |
| 0.185 | 55.0 | 34265 | 0.3052 | 0.7365 |
| 0.1868 | 56.0 | 34888 | 0.3155 | 0.7365 |
| 0.1888 | 57.0 | 35511 | 0.3056 | 0.7256 |
| 0.1885 | 58.0 | 36134 | 0.3069 | 0.7329 |
| 0.192 | 59.0 | 36757 | 0.3076 | 0.7329 |
| 0.1807 | 60.0 | 37380 | 0.3090 | 0.7401 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ramoslee/whisper-small-th_10000 | Ramoslee | 2023-08-24T01:41:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"th",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:Ramoslee/Whishper-small-th",
"base_model:finetune:Ramoslee/Whishper-small-th",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-23T13:02:11Z | ---
language:
- th
license: apache-2.0
base_model: Ramoslee/Whishper-small-th
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Thai
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 th
type: mozilla-foundation/common_voice_11_0
config: th
split: test
args: th
metrics:
- name: Wer
type: wer
value: 18.87614018843608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Thai
This model is a fine-tuned version of [Ramoslee/Whishper-small-th](https://huggingface.co/Ramoslee/Whishper-small-th) on the mozilla-foundation/common_voice_11_0 th dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1836
- Wer: 18.8761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1262 | 0.27 | 1000 | 0.2267 | 25.1536 |
| 0.1174 | 0.55 | 2000 | 0.2190 | 24.6093 |
| 0.1363 | 0.82 | 3000 | 0.2059 | 24.5492 |
| 0.0618 | 1.1 | 4000 | 0.1970 | 22.1944 |
| 0.0686 | 1.37 | 5000 | 0.1916 | 21.2372 |
| 0.0722 | 1.65 | 6000 | 0.1854 | 20.3488 |
| 0.0771 | 1.92 | 7000 | 0.1801 | 19.8033 |
| 0.0191 | 2.2 | 8000 | 0.1859 | 19.5656 |
| 0.0237 | 2.47 | 9000 | 0.1862 | 19.1376 |
| 0.0205 | 2.74 | 10000 | 0.1836 | 18.8761 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.14.4.dev0
- Tokenizers 0.12.1
|
dkqjrm/20230824083855 | dkqjrm | 2023-08-24T01:40:49Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T23:39:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824083855'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824083855
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0821
- Accuracy: 0.7473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5366 | 1.0 | 623 | 0.8415 | 0.4729 |
| 0.3757 | 2.0 | 1246 | 0.3098 | 0.4693 |
| 0.3001 | 3.0 | 1869 | 0.5999 | 0.4729 |
| 0.3227 | 4.0 | 2492 | 0.2808 | 0.4729 |
| 0.3109 | 5.0 | 3115 | 0.2772 | 0.5487 |
| 0.3034 | 6.0 | 3738 | 0.1529 | 0.6029 |
| 0.2648 | 7.0 | 4361 | 0.1565 | 0.6029 |
| 0.2104 | 8.0 | 4984 | 0.1394 | 0.6245 |
| 0.1926 | 9.0 | 5607 | 0.1404 | 0.6390 |
| 0.175 | 10.0 | 6230 | 0.1292 | 0.6859 |
| 0.1634 | 11.0 | 6853 | 0.1174 | 0.7004 |
| 0.1618 | 12.0 | 7476 | 0.1228 | 0.6787 |
| 0.1555 | 13.0 | 8099 | 0.1287 | 0.6534 |
| 0.1534 | 14.0 | 8722 | 0.1461 | 0.6570 |
| 0.1523 | 15.0 | 9345 | 0.1356 | 0.6426 |
| 0.1448 | 16.0 | 9968 | 0.1065 | 0.6968 |
| 0.1402 | 17.0 | 10591 | 0.1011 | 0.7292 |
| 0.1342 | 18.0 | 11214 | 0.1112 | 0.6643 |
| 0.1388 | 19.0 | 11837 | 0.1255 | 0.6823 |
| 0.1281 | 20.0 | 12460 | 0.0965 | 0.7220 |
| 0.128 | 21.0 | 13083 | 0.0985 | 0.7040 |
| 0.1236 | 22.0 | 13706 | 0.1339 | 0.7040 |
| 0.1267 | 23.0 | 14329 | 0.1238 | 0.7365 |
| 0.1186 | 24.0 | 14952 | 0.0942 | 0.7292 |
| 0.1101 | 25.0 | 15575 | 0.0923 | 0.7220 |
| 0.1122 | 26.0 | 16198 | 0.0919 | 0.7401 |
| 0.1088 | 27.0 | 16821 | 0.0893 | 0.7292 |
| 0.1059 | 28.0 | 17444 | 0.0897 | 0.7401 |
| 0.106 | 29.0 | 18067 | 0.0878 | 0.7509 |
| 0.1019 | 30.0 | 18690 | 0.0945 | 0.7365 |
| 0.1047 | 31.0 | 19313 | 0.0900 | 0.7256 |
| 0.1011 | 32.0 | 19936 | 0.0884 | 0.7437 |
| 0.0962 | 33.0 | 20559 | 0.0874 | 0.7329 |
| 0.0971 | 34.0 | 21182 | 0.0933 | 0.7329 |
| 0.0914 | 35.0 | 21805 | 0.0845 | 0.7473 |
| 0.0965 | 36.0 | 22428 | 0.0914 | 0.7365 |
| 0.0914 | 37.0 | 23051 | 0.0855 | 0.7292 |
| 0.0894 | 38.0 | 23674 | 0.0867 | 0.7256 |
| 0.087 | 39.0 | 24297 | 0.0861 | 0.7329 |
| 0.0865 | 40.0 | 24920 | 0.0830 | 0.7329 |
| 0.0851 | 41.0 | 25543 | 0.0827 | 0.7473 |
| 0.0837 | 42.0 | 26166 | 0.0818 | 0.7365 |
| 0.0865 | 43.0 | 26789 | 0.0840 | 0.7401 |
| 0.0807 | 44.0 | 27412 | 0.0815 | 0.7292 |
| 0.0829 | 45.0 | 28035 | 0.0840 | 0.7365 |
| 0.0814 | 46.0 | 28658 | 0.0851 | 0.7401 |
| 0.0798 | 47.0 | 29281 | 0.0841 | 0.7401 |
| 0.0806 | 48.0 | 29904 | 0.0838 | 0.7473 |
| 0.0773 | 49.0 | 30527 | 0.0823 | 0.7401 |
| 0.0769 | 50.0 | 31150 | 0.0813 | 0.7329 |
| 0.0763 | 51.0 | 31773 | 0.0822 | 0.7509 |
| 0.0792 | 52.0 | 32396 | 0.0833 | 0.7365 |
| 0.0772 | 53.0 | 33019 | 0.0819 | 0.7365 |
| 0.0732 | 54.0 | 33642 | 0.0810 | 0.7365 |
| 0.0708 | 55.0 | 34265 | 0.0808 | 0.7365 |
| 0.0741 | 56.0 | 34888 | 0.0824 | 0.7509 |
| 0.0725 | 57.0 | 35511 | 0.0816 | 0.7437 |
| 0.072 | 58.0 | 36134 | 0.0812 | 0.7437 |
| 0.0712 | 59.0 | 36757 | 0.0827 | 0.7401 |
| 0.0707 | 60.0 | 37380 | 0.0821 | 0.7473 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230824084116 | dkqjrm | 2023-08-24T01:39:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T23:41:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824084116'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824084116
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6747
- Accuracy: 0.7329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0144 | 1.0 | 623 | 1.2485 | 0.4729 |
| 0.8551 | 2.0 | 1246 | 0.7296 | 0.5415 |
| 0.9621 | 3.0 | 1869 | 1.3927 | 0.4729 |
| 0.8648 | 4.0 | 2492 | 0.6253 | 0.6173 |
| 0.8311 | 5.0 | 3115 | 0.6509 | 0.6606 |
| 0.8365 | 6.0 | 3738 | 0.6018 | 0.6895 |
| 0.772 | 7.0 | 4361 | 0.7314 | 0.6751 |
| 0.7306 | 8.0 | 4984 | 1.0930 | 0.5957 |
| 0.763 | 9.0 | 5607 | 0.7093 | 0.7076 |
| 0.6931 | 10.0 | 6230 | 0.6302 | 0.6968 |
| 0.6465 | 11.0 | 6853 | 1.1188 | 0.5776 |
| 0.6503 | 12.0 | 7476 | 0.6957 | 0.7112 |
| 0.6657 | 13.0 | 8099 | 0.6470 | 0.7112 |
| 0.6315 | 14.0 | 8722 | 0.7099 | 0.7112 |
| 0.5491 | 15.0 | 9345 | 0.5178 | 0.7184 |
| 0.4908 | 16.0 | 9968 | 0.6282 | 0.7365 |
| 0.4742 | 17.0 | 10591 | 0.6553 | 0.7256 |
| 0.4653 | 18.0 | 11214 | 0.5637 | 0.7112 |
| 0.492 | 19.0 | 11837 | 0.5870 | 0.7184 |
| 0.4519 | 20.0 | 12460 | 0.8201 | 0.7292 |
| 0.4198 | 21.0 | 13083 | 0.6294 | 0.7365 |
| 0.403 | 22.0 | 13706 | 0.6998 | 0.7220 |
| 0.4017 | 23.0 | 14329 | 0.8424 | 0.7220 |
| 0.368 | 24.0 | 14952 | 0.6179 | 0.7401 |
| 0.3514 | 25.0 | 15575 | 0.6303 | 0.7256 |
| 0.3458 | 26.0 | 16198 | 0.6241 | 0.7292 |
| 0.3488 | 27.0 | 16821 | 0.6348 | 0.7365 |
| 0.33 | 28.0 | 17444 | 0.6663 | 0.7292 |
| 0.3133 | 29.0 | 18067 | 0.6231 | 0.7437 |
| 0.3108 | 30.0 | 18690 | 0.6940 | 0.7220 |
| 0.3156 | 31.0 | 19313 | 0.7685 | 0.7256 |
| 0.2887 | 32.0 | 19936 | 0.5912 | 0.7365 |
| 0.2871 | 33.0 | 20559 | 0.6539 | 0.7401 |
| 0.2835 | 34.0 | 21182 | 0.7319 | 0.7292 |
| 0.2587 | 35.0 | 21805 | 0.6106 | 0.7365 |
| 0.2767 | 36.0 | 22428 | 0.6255 | 0.7329 |
| 0.2621 | 37.0 | 23051 | 0.7181 | 0.7329 |
| 0.2733 | 38.0 | 23674 | 0.6841 | 0.7365 |
| 0.2473 | 39.0 | 24297 | 0.7042 | 0.7329 |
| 0.2467 | 40.0 | 24920 | 0.6123 | 0.7329 |
| 0.2357 | 41.0 | 25543 | 0.6681 | 0.7365 |
| 0.2333 | 42.0 | 26166 | 0.7094 | 0.7292 |
| 0.2387 | 43.0 | 26789 | 0.6546 | 0.7365 |
| 0.2248 | 44.0 | 27412 | 0.7021 | 0.7329 |
| 0.2271 | 45.0 | 28035 | 0.6913 | 0.7545 |
| 0.2288 | 46.0 | 28658 | 0.6855 | 0.7365 |
| 0.2159 | 47.0 | 29281 | 0.6495 | 0.7401 |
| 0.2107 | 48.0 | 29904 | 0.6568 | 0.7292 |
| 0.2204 | 49.0 | 30527 | 0.7337 | 0.7329 |
| 0.2038 | 50.0 | 31150 | 0.6391 | 0.7365 |
| 0.2183 | 51.0 | 31773 | 0.6593 | 0.7437 |
| 0.2041 | 52.0 | 32396 | 0.6518 | 0.7220 |
| 0.2107 | 53.0 | 33019 | 0.6677 | 0.7256 |
| 0.2076 | 54.0 | 33642 | 0.6716 | 0.7292 |
| 0.1946 | 55.0 | 34265 | 0.6957 | 0.7256 |
| 0.1974 | 56.0 | 34888 | 0.6858 | 0.7256 |
| 0.2047 | 57.0 | 35511 | 0.6721 | 0.7329 |
| 0.2001 | 58.0 | 36134 | 0.6747 | 0.7365 |
| 0.1899 | 59.0 | 36757 | 0.6842 | 0.7329 |
| 0.1872 | 60.0 | 37380 | 0.6747 | 0.7329 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230824082958 | dkqjrm | 2023-08-24T01:33:05Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T23:30:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824082958'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824082958
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5547
- Accuracy: 0.7581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1252 | 1.0 | 623 | 0.6915 | 0.5415 |
| 0.9382 | 2.0 | 1246 | 0.7221 | 0.5307 |
| 1.0555 | 3.0 | 1869 | 0.7387 | 0.5199 |
| 0.9336 | 4.0 | 2492 | 0.9751 | 0.6390 |
| 0.8894 | 5.0 | 3115 | 0.9277 | 0.6643 |
| 0.9066 | 6.0 | 3738 | 1.1836 | 0.6931 |
| 0.8496 | 7.0 | 4361 | 0.8242 | 0.7184 |
| 0.7761 | 8.0 | 4984 | 0.9061 | 0.6859 |
| 0.8175 | 9.0 | 5607 | 0.7474 | 0.7220 |
| 0.7575 | 10.0 | 6230 | 0.8582 | 0.7292 |
| 0.747 | 11.0 | 6853 | 0.8351 | 0.7256 |
| 0.728 | 12.0 | 7476 | 0.8912 | 0.7148 |
| 0.8296 | 13.0 | 8099 | 0.9471 | 0.7220 |
| 0.7327 | 14.0 | 8722 | 1.1407 | 0.7148 |
| 0.7284 | 15.0 | 9345 | 0.7681 | 0.7256 |
| 0.6642 | 16.0 | 9968 | 1.4084 | 0.6679 |
| 0.5888 | 17.0 | 10591 | 0.8413 | 0.7329 |
| 0.6074 | 18.0 | 11214 | 0.7461 | 0.7401 |
| 0.625 | 19.0 | 11837 | 0.9516 | 0.7545 |
| 0.5911 | 20.0 | 12460 | 1.3395 | 0.7292 |
| 0.5322 | 21.0 | 13083 | 1.3924 | 0.7509 |
| 0.5247 | 22.0 | 13706 | 1.1553 | 0.7256 |
| 0.5146 | 23.0 | 14329 | 1.6692 | 0.7040 |
| 0.4493 | 24.0 | 14952 | 1.2315 | 0.7437 |
| 0.399 | 25.0 | 15575 | 1.2710 | 0.7545 |
| 0.3644 | 26.0 | 16198 | 1.5049 | 0.7473 |
| 0.4031 | 27.0 | 16821 | 1.5735 | 0.7401 |
| 0.386 | 28.0 | 17444 | 1.4749 | 0.7220 |
| 0.3735 | 29.0 | 18067 | 0.9541 | 0.7365 |
| 0.356 | 30.0 | 18690 | 1.3936 | 0.7473 |
| 0.3496 | 31.0 | 19313 | 0.9982 | 0.7437 |
| 0.3149 | 32.0 | 19936 | 0.9572 | 0.7581 |
| 0.3094 | 33.0 | 20559 | 1.5663 | 0.7256 |
| 0.2886 | 34.0 | 21182 | 1.5993 | 0.7365 |
| 0.2545 | 35.0 | 21805 | 1.1515 | 0.7545 |
| 0.276 | 36.0 | 22428 | 1.2768 | 0.7473 |
| 0.2645 | 37.0 | 23051 | 1.4290 | 0.7509 |
| 0.262 | 38.0 | 23674 | 1.2363 | 0.7617 |
| 0.2261 | 39.0 | 24297 | 1.3446 | 0.7617 |
| 0.2291 | 40.0 | 24920 | 1.0532 | 0.7509 |
| 0.2178 | 41.0 | 25543 | 1.4745 | 0.7509 |
| 0.2104 | 42.0 | 26166 | 1.3830 | 0.7545 |
| 0.217 | 43.0 | 26789 | 1.7099 | 0.7473 |
| 0.214 | 44.0 | 27412 | 1.7054 | 0.7401 |
| 0.1856 | 45.0 | 28035 | 1.4350 | 0.7545 |
| 0.2014 | 46.0 | 28658 | 1.7266 | 0.7473 |
| 0.1759 | 47.0 | 29281 | 1.2659 | 0.7581 |
| 0.2027 | 48.0 | 29904 | 1.8336 | 0.7401 |
| 0.1871 | 49.0 | 30527 | 1.3398 | 0.7509 |
| 0.1586 | 50.0 | 31150 | 1.4948 | 0.7509 |
| 0.1619 | 51.0 | 31773 | 1.3787 | 0.7545 |
| 0.1665 | 52.0 | 32396 | 1.6532 | 0.7545 |
| 0.1786 | 53.0 | 33019 | 1.4697 | 0.7581 |
| 0.1609 | 54.0 | 33642 | 1.5462 | 0.7653 |
| 0.1304 | 55.0 | 34265 | 1.3577 | 0.7581 |
| 0.1576 | 56.0 | 34888 | 1.7004 | 0.7617 |
| 0.1522 | 57.0 | 35511 | 1.4629 | 0.7581 |
| 0.1496 | 58.0 | 36134 | 1.6336 | 0.7581 |
| 0.1406 | 59.0 | 36757 | 1.5699 | 0.7545 |
| 0.1268 | 60.0 | 37380 | 1.5547 | 0.7581 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nxnhjrjtbjfzhrovwl/limarp-llongma2-8k-ggml-f16 | nxnhjrjtbjfzhrovwl | 2023-08-24T01:12:05Z | 0 | 0 | null | [
"arxiv:2305.11206",
"license:agpl-3.0",
"region:us"
] | null | 2023-08-23T18:04:50Z | ---
license: agpl-3.0
---
This repository contains the unquantized merge of [limarp-llongma2-8k lora](https://huggingface.co/lemonilia/limarp-llongma2-8k) in ggml format.
You can quantize the f16 ggml to the quantization of your choice by following the below steps:
1. Download and extract the [llama.cpp binaries](https://github.com/ggerganov/llama.cpp/releases/download/master-41c6741/llama-master-41c6741-bin-win-avx2-x64.zip) ([or compile it yourself if you're on Linux](https://github.com/ggerganov/llama.cpp#build))
2. Move the "quantize" executable to the same folder where you downloaded the f16 ggml model.
3. Open a command prompt window in that same folder and write the following command, making the changes that you see fit.
```bash
quantize.exe limarp-llongma2-13b.ggmlv3.f16.bin limarp-llongma2-13b.ggmlv3.q4_0.bin q4_0
```
4. Press enter to run the command and the quantized model will be generated in the folder.
The below are the contents of the original model card:
# Model Card for LimaRP-LLongMA2-8k-v2
LimaRP-LLongMA2-8k is an experimental [Llama2](https://huggingface.co/meta-llama) finetune narrowly focused on novel-style roleplay chatting, and a continuation of the previously released [LimaRP-llama2](https://huggingface.co/lemonilia/limarp-llama2) with a larger number of training tokens (+95%).
To considerably facilitate uploading, distribution and merging with other models, LoRA adapters are provided. LimaRP-LLongMA2 LoRA adapters, as their name suggests, are intended to be applied on LLongMA-2 models with 8k context ([7B](https://huggingface.co/conceptofmind/LLongMA-2-7b) and [13B](https://huggingface.co/conceptofmind/LLongMA-2-13b)) and their derivatives.
Data updates may be posted in the future. The current version is **v3**.
## Model Details
### Model Description
This is an experimental attempt at creating an RP-oriented fine-tune using a manually-curated, high-quality dataset of human-generated conversations. The main rationale for this are the observations from [Zhou et al.](https://arxiv.org/abs/2305.11206). The authors suggested that just 1000-2000 carefully curated training examples may yield high quality output for assistant-type chatbots. This is in contrast with the commonly employed strategy where a very large number of training examples (tens of thousands to even millions) of widely varying quality are used.
For LimaRP a similar approach was used, with the difference that the conversational data is almost entirely human-generated. Every training example is manually compiled and selected to comply with subjective quality parameters, with virtually no chance for OpenAI-style alignment responses to come up.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to approximate the experience of 1-on-1 roleplay as observed on many Internet forums dedicated on roleplaying. It _must_ be used with a specific format similar to that of this template:
```
<<SYSTEM>>
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.
<<AIBOT>>
Character: {utterance}
<<HUMAN>>
User: {utterance}
[etc.]
```
With `<<SYSTEM>>`, `<<AIBOT>>` and `<<HUMAN>>` being special instruct-mode sequences. The text under curly braces must be replaced with appropriate text in _natural language_. Replace `User` and `Character` with actual character names.
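For convenience, here is a minimal, unofficial sketch (not part of the original release) of how the template above could be assembled before it is sent to the backend:
```python
# Unofficial helper: build a LimaRP-style prompt from persona/scenario and chat history.
def build_limarp_prompt(char, user, char_persona, user_persona, scenario,
                        history, length="average"):
    """history: list of (speaker, utterance) tuples, oldest first."""
    lines = [
        "<<SYSTEM>>",
        f"{char}'s Persona: {char_persona}",
        f"{user}'s Persona: {user_persona}",
        f"Scenario: {scenario}",
        f"Play the role of {char}. You must engage in a roleplaying chat with {user} "
        f"below this line. Do not write dialogues and narration for {user}. "
        f"{char} should respond with messages of {length} length.",
    ]
    for speaker, utterance in history:
        lines.append("<<AIBOT>>" if speaker == char else "<<HUMAN>>")
        lines.append(f"{speaker}: {utterance}")
    lines.append("<<AIBOT>>")
    lines.append(f"{char}:")  # the model writes the next message from here
    return "\n".join(lines)
```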
This more graphical breakdown of the prompt format with a practical example might make it clearer:

### More detailed notes on prompt format, usage and other settings
- **The model has been tested mainly using Oobabooga's `text-generation-webui` as a backend**
- Preferably respect spacing and newlines shown above. This might not be possible yet with some frontends.
- Replace `Character` and `User` in the above template with your desired names.
- The scenario description has a large influence on what the character will do. Try to keep it more open-ended to lessen its impact.
- **The model expects users and characters to use third-person narration in simple past and enclose dialogues with standard quotation marks `" "`.** Other formats are not supported (= not in the training data).
- Do not use newlines in Persona and Scenario. Use natural language.
- The last line in `<<SYSTEM>>` does not need to be written exactly as depicted, but should mention that `Character` and `User` will engage in roleplay and specify the length of `Character`'s messages
- The message lengths used during training are: `tiny`, `short`, `average`, `long`, `huge`, `humongous`. However, there might not have been enough training examples for each length for this instruction to have a significant impact. The preferred lengths for this type of role-playing are `average` or `long`.
- Suggested text generation settings (also collected into a single snippet after this list):
- Temperature ~0.70
- Tail-Free Sampling 0.85
- Repetition penalty ~1.10 (Compared to LLaMAv1, Llama2 appears to require a somewhat higher rep.pen.)
- Not used: Top-P (disabled/set to 1.0), Top-K (disabled/set to 0), Typical P (disabled/set to 1.0)
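For reference, the suggested sampler values above can be kept together in one place; the key names below are illustrative only and depend on the backend/front-end you use (they are not taken from the original release):
```python
# Illustrative grouping of the suggested sampling settings listed above.
suggested_sampling = {
    "temperature": 0.70,
    "tfs": 0.85,                  # Tail-Free Sampling
    "repetition_penalty": 1.10,
    "top_p": 1.0,                 # disabled
    "top_k": 0,                   # disabled
    "typical_p": 1.0,             # disabled
}
```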
### Sample character cards
Here are a few example **SillyTavern character cards** following the required format; download and import into SillyTavern. Feel free to modify and adapt them to your purposes.
- [Carina, a 'big sister' android maid](https://files.catbox.moe/1qcqqj.png)
- [Charlotte, a cute android maid](https://files.catbox.moe/k1x9a7.png)
- [Etma, an 'aligned' AI assistant](https://files.catbox.moe/dj8ggi.png)
- [Mila, an anthro pet catgirl](https://files.catbox.moe/amnsew.png)
- [Samuel, a handsome vampire](https://files.catbox.moe/f9uiw1.png)
And here is a sample of how the model is intended to behave with proper chat and prompt formatting: https://files.catbox.moe/egfd90.png
### Other tips
It's possible to make the model automatically generate random character information and scenario by adding just `<<SYSTEM>>` and the character name in text completion mode in `text-generation-webui`, as done here (click to enlarge). The format generally closely matches that of the training data:

### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model has not been tested for:
- IRC-style chat
- Markdown-style roleplay (asterisks for actions, dialogue lines without quotation marks)
- Storywriting
- Usage without the suggested prompt format
Furthermore, the model is not intended nor expected to provide factual and accurate information on any subject.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
The model may easily output disturbing and socially inappropriate content and therefore should not be used by minors or within environments where a general audience is expected. Its outputs will have in general a strong NSFW bias unless the character card/description de-emphasizes it.
## How to Get Started with the Model
Download and load with `text-generation-webui` as a back-end application. It's suggested to start the `webui` via command line. Assuming you have copied the LoRA files under a subdirectory called `lora/limarp-llongma2-7b`, you would use something like this for the 7B model:
```
python server.py --api --verbose --model LLongMA-7B --lora limarp-llongma2-7b
```
When using 4-bit `bitsnbytes` it is suggested to use double quantization to increase accuracy. The starting command may be something like this:
```
python server.py --verbose --api --model LLongMA-2-13B --lora limarp13-llongma2-13b --load-in-4bit --use_double_quant
```
Then, preferably use [SillyTavern](https://github.com/SillyTavern/SillyTavern) as a front-end using the following settings:

In addition to enabling the instruct mode with the correct sequences, it's particularly important to **enable "Include names"**, as the model was trained with them at the start of each utterance. If it's disabled, the model can get confused and often writes for the user in its responses.
To take advantage of this model's larger context length, unlock the context size and set it to any value up to 8192 tokens, depending on your VRAM constraints. On most consumer GPUs it will likely need to be set to a lower value.

It is **recommended to ban/disable the EOS token**, as it can apparently cause [artifacts or tokenization issues](https://files.catbox.moe/cxfrzu.png) when it gets generated close to punctuation or quotation marks, at least in SillyTavern. These issues typically appear in AI responses.

## Training Details
### Training Data
The training data comprises about **1500** manually edited roleplaying conversation threads from various Internet RP forums, for about **24 megabytes** of data.
Character and Scenario information was initially filled in for every thread mainly with the help of `gpt-4`; later on, this was accomplished with a custom summarizer. Conversations in the dataset are almost entirely human-generated, except for a handful of messages. Character names in the RP stories have been isolated and replaced with standard placeholder strings. Usernames, out-of-context (OOC) messages, and personal information have not been intentionally included.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The version of LimaRP uploaded to this repository was trained on a small NVidia A40 cluster in 8-bit, with regular LoRA adapters and the 8-bit AdamW optimizer.
#### Training Hyperparameters
The most important settings were as follows:
- --learning_rate 0.000065
- --lr_scheduler_type cosine
- --lora_r 8
- --lora_alpha 16
- --lora_dropout 0.01
- --num_train_epochs 2
- --bf16 True
- --tf32 True
- --bits 8
- --per_device_train_batch_size 1
- --gradient_accumulation_steps 1
- --optim adamw_bnb_8bit
**All linear LoRA layers** were targeted.
An effective batch size of 1 was found to yield the lowest loss curves during fine-tuning. It was also found that using `--train_on_source False` with the entire training example as the output yields similar results; these LoRAs have been trained in this way (similar to what was done with [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), or as in unsupervised finetuning).
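For reference, the settings above roughly correspond to a `peft` configuration like the following sketch; the exact list of target modules depends on the training script and is an assumption here:

```python
from peft import LoraConfig

# Mirrors --lora_r 8, --lora_alpha 16, --lora_dropout 0.01 with all linear layers targeted.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.01,
    bias="none",
    task_type="CAUSAL_LM",
    # Linear projection names for LLaMA-style models (assumption).
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```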
<!-- ## Evaluation -->
<!-- This section describes the evaluation protocols and provides the results. -->
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Finetuning this model on 8 NVidia A40 48GB in parallel takes about 25 minutes (7B) or 45 minutes (13B). |
davidggphy/whisper-tiny-finetuned-minds14-enUS | davidggphy | 2023-08-24T01:07:40Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-23T23:30:00Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds14-enUS_2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
metrics:
- name: Wer
type: wer
value: 0.33943329397874855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds14-enUS_2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7508
- Wer Ortho: 0.3356
- Wer: 0.3394
- Cer: 0.2613
- Cer Ortho: 0.2623
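A minimal inference sketch with the `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder for a 16 kHz English recording:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="davidggphy/whisper-tiny-finetuned-minds14-enUS",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```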
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | Cer | Cer Ortho |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:---------:|
| 0.0136 | 7.14 | 100 | 0.6142 | 0.3362 | 0.3388 | 0.2587 | 0.2614 |
| 0.0009 | 14.29 | 200 | 0.6704 | 0.3288 | 0.3300 | 0.2515 | 0.2534 |
| 0.0011 | 21.43 | 300 | 0.6858 | 0.3054 | 0.3093 | 0.2363 | 0.2374 |
| 0.0005 | 28.57 | 400 | 0.7081 | 0.3455 | 0.3477 | 0.2699 | 0.2711 |
| 0.0004 | 35.71 | 500 | 0.7191 | 0.3467 | 0.3501 | 0.2727 | 0.2736 |
| 0.0001 | 42.86 | 600 | 0.7337 | 0.3405 | 0.3447 | 0.2652 | 0.2662 |
| 0.0001 | 50.0 | 700 | 0.7418 | 0.3393 | 0.3430 | 0.2636 | 0.2645 |
| 0.0001 | 57.14 | 800 | 0.7466 | 0.3387 | 0.3424 | 0.2634 | 0.2644 |
| 0.0001 | 64.29 | 900 | 0.7496 | 0.3350 | 0.3388 | 0.2604 | 0.2614 |
| 0.0001 | 71.43 | 1000 | 0.7508 | 0.3356 | 0.3394 | 0.2613 | 0.2623 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
michaelriedl/MonsterForgeFusion-sd-2-base | michaelriedl | 2023-08-24T01:06:20Z | 5 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-base",
"base_model:adapter:stabilityai/stable-diffusion-2-base",
"license:openrail++",
"region:us"
] | text-to-image | 2023-08-24T00:46:11Z | ---
license: openrail++
base_model: stabilityai/stable-diffusion-2-base
tags:
- stable-diffusion
- text-to-image
- diffusers
- lora
inference: true
--- |
LBR47/wav2vec2-base-finetuned-gtzan | LBR47 | 2023-08-24T01:05:57Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:bookbot/distil-ast-audioset",
"base_model:finetune:bookbot/distil-ast-audioset",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-08-14T04:15:04Z | ---
license: apache-2.0
base_model: bookbot/distil-ast-audioset
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: train
split: train
args: train
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7907
- Accuracy: 0.89
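A minimal inference sketch with the audio-classification pipeline; the audio path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="LBR47/wav2vec2-base-finetuned-gtzan",
)
for prediction in classifier("song.wav"):  # "song.wav" is a placeholder path
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```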
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 |
aiknight87/falcon-40b-instruct-test-system | aiknight87 | 2023-08-24T00:53:18Z | 1 | 0 | peft | [
"peft",
"RefinedWeb",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2023-08-23T06:41:11Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
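The listed quantization settings roughly correspond to the following `transformers`/`peft` loading sketch; the base model identifier is an assumption inferred from the adapter name:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "tiiuae/falcon-40b-instruct" is an assumption; adjust to the actual base model.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b-instruct",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "aiknight87/falcon-40b-instruct-test-system")
```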
### Framework versions
- PEFT 0.6.0.dev0
|
jimmyofdoom/a2c-PandaReachDense-v3 | jimmyofdoom | 2023-08-24T00:48:10Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T00:42:51Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DuyTa/vi_whisper-small | DuyTa | 2023-08-24T00:33:35Z | 80 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-13T14:16:43Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: vi_whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Vivos + Commonvoice
type: vivos
config: None
split: None
metrics:
- name: Wer
type: wer
value: 21.8855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi_whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on a mix of the VIVOS and CommonVoice datasets.
It achieves the following results on the evaluation set:
- Loss: 0.2894
- Wer: 21.8855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The training phase used the VIVOS dataset together with a cleaned CommonVoice dataset.
The VIVOS evaluation set was used for evaluation.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 8000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.249 | 1.1 | 1000 | 0.3766 | 32.1678 |
| 0.1416 | 2.2 | 2000 | 0.2881 | 46.4646 |
| 0.0839 | 3.3 | 3000 | 0.2799 | 22.7791 |
| 0.0546 | 4.41 | 4000 | 0.2894 | 21.8855 |
| 0.0256 | 5.51 | 5000 | 0.3023 | 32.2973 |
| 0.0111 | 6.61 | 6000 | 0.3061 | 31.0153 |
| 0.0028 | 7.71 | 7000 | 0.3143 | 27.1691 |
| 0.0014 | 8.81 | 8000 | 0.3187 | 27.3634 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pabloyesteb/a2c-PandaReachDense-v3 | pabloyesteb | 2023-08-24T00:21:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-24T00:15:07Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ardt-multipart/ardt-multipart-combo_train_walker2d_v2-2308_2328-99 | ardt-multipart | 2023-08-24T00:20:53Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-23T22:30:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-combo_train_walker2d_v2-2308_2328-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-combo_train_walker2d_v2-2308_2328-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nlpnlp/xlm-roberta-base-finetuned-panx-de | nlpnlp | 2023-08-24T00:04:07Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-23T17:08:22Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8600170502983802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1391
- F1: 0.8600
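A minimal inference sketch with the token-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nlpnlp/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```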
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2598 | 1.0 | 525 | 0.1697 | 0.8177 |
| 0.1253 | 2.0 | 1050 | 0.1343 | 0.8509 |
| 0.0812 | 3.0 | 1575 | 0.1391 | 0.8600 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
azhang1212/angela_punc_untranslated_eval | azhang1212 | 2023-08-23T23:44:42Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-23T20:30:13Z | ---
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: angela_punc_untranslated_eval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# angela_punc_untranslated_eval
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1902
- Precision: 0.3889
- Recall: 0.2568
- F1: 0.3093
- Accuracy: 0.9517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1524 | 1.0 | 1283 | 0.1547 | 0.4163 | 0.1471 | 0.2174 | 0.9546 |
| 0.1295 | 2.0 | 2566 | 0.1518 | 0.4489 | 0.1943 | 0.2712 | 0.9556 |
| 0.1113 | 3.0 | 3849 | 0.1614 | 0.4152 | 0.2323 | 0.2979 | 0.9538 |
| 0.0896 | 4.0 | 5132 | 0.1784 | 0.4248 | 0.2346 | 0.3023 | 0.9542 |
| 0.073 | 5.0 | 6415 | 0.1902 | 0.3889 | 0.2568 | 0.3093 | 0.9517 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230824064444 | dkqjrm | 2023-08-23T23:38:44Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T21:45:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824064444'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824064444
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0709
- Accuracy: 0.7329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.4733 | 0.5307 |
| 0.3538 | 2.0 | 624 | 0.1917 | 0.5126 |
| 0.3538 | 3.0 | 936 | 0.1696 | 0.5560 |
| 0.2775 | 4.0 | 1248 | 0.1700 | 0.5271 |
| 0.2538 | 5.0 | 1560 | 0.3497 | 0.5343 |
| 0.2538 | 6.0 | 1872 | 0.2183 | 0.5632 |
| 0.259 | 7.0 | 2184 | 0.1783 | 0.5018 |
| 0.259 | 8.0 | 2496 | 0.2321 | 0.5848 |
| 0.2587 | 9.0 | 2808 | 0.2081 | 0.6101 |
| 0.2211 | 10.0 | 3120 | 0.1194 | 0.6715 |
| 0.2211 | 11.0 | 3432 | 0.1505 | 0.6390 |
| 0.198 | 12.0 | 3744 | 0.1130 | 0.7004 |
| 0.1939 | 13.0 | 4056 | 0.1187 | 0.6679 |
| 0.1939 | 14.0 | 4368 | 0.1175 | 0.6787 |
| 0.1687 | 15.0 | 4680 | 0.1092 | 0.7040 |
| 0.1687 | 16.0 | 4992 | 0.0984 | 0.7076 |
| 0.1511 | 17.0 | 5304 | 0.1032 | 0.7076 |
| 0.1448 | 18.0 | 5616 | 0.1024 | 0.7401 |
| 0.1448 | 19.0 | 5928 | 0.0902 | 0.7112 |
| 0.1392 | 20.0 | 6240 | 0.0972 | 0.7112 |
| 0.1283 | 21.0 | 6552 | 0.0880 | 0.7184 |
| 0.1283 | 22.0 | 6864 | 0.0892 | 0.7329 |
| 0.1257 | 23.0 | 7176 | 0.1156 | 0.7401 |
| 0.1257 | 24.0 | 7488 | 0.0940 | 0.7329 |
| 0.1215 | 25.0 | 7800 | 0.0876 | 0.7401 |
| 0.1184 | 26.0 | 8112 | 0.1289 | 0.7437 |
| 0.1184 | 27.0 | 8424 | 0.0808 | 0.7256 |
| 0.1112 | 28.0 | 8736 | 0.0823 | 0.7401 |
| 0.1139 | 29.0 | 9048 | 0.0838 | 0.7256 |
| 0.1139 | 30.0 | 9360 | 0.0855 | 0.7220 |
| 0.1095 | 31.0 | 9672 | 0.0813 | 0.7256 |
| 0.1095 | 32.0 | 9984 | 0.0765 | 0.7256 |
| 0.106 | 33.0 | 10296 | 0.0847 | 0.7365 |
| 0.1034 | 34.0 | 10608 | 0.0844 | 0.7509 |
| 0.1034 | 35.0 | 10920 | 0.0811 | 0.7184 |
| 0.0991 | 36.0 | 11232 | 0.0811 | 0.7292 |
| 0.0938 | 37.0 | 11544 | 0.0847 | 0.7365 |
| 0.0938 | 38.0 | 11856 | 0.0824 | 0.7256 |
| 0.0973 | 39.0 | 12168 | 0.0760 | 0.7292 |
| 0.0973 | 40.0 | 12480 | 0.0786 | 0.7220 |
| 0.0908 | 41.0 | 12792 | 0.0732 | 0.7473 |
| 0.0894 | 42.0 | 13104 | 0.0763 | 0.7401 |
| 0.0894 | 43.0 | 13416 | 0.0811 | 0.7365 |
| 0.0896 | 44.0 | 13728 | 0.0734 | 0.7473 |
| 0.0882 | 45.0 | 14040 | 0.0747 | 0.7329 |
| 0.0882 | 46.0 | 14352 | 0.0729 | 0.7401 |
| 0.0847 | 47.0 | 14664 | 0.0723 | 0.7329 |
| 0.0847 | 48.0 | 14976 | 0.0748 | 0.7401 |
| 0.0854 | 49.0 | 15288 | 0.0755 | 0.7292 |
| 0.0813 | 50.0 | 15600 | 0.0715 | 0.7329 |
| 0.0813 | 51.0 | 15912 | 0.0719 | 0.7292 |
| 0.0845 | 52.0 | 16224 | 0.0721 | 0.7401 |
| 0.0821 | 53.0 | 16536 | 0.0711 | 0.7292 |
| 0.0821 | 54.0 | 16848 | 0.0714 | 0.7437 |
| 0.0802 | 55.0 | 17160 | 0.0711 | 0.7401 |
| 0.0802 | 56.0 | 17472 | 0.0718 | 0.7329 |
| 0.0798 | 57.0 | 17784 | 0.0708 | 0.7220 |
| 0.0796 | 58.0 | 18096 | 0.0715 | 0.7365 |
| 0.0796 | 59.0 | 18408 | 0.0712 | 0.7329 |
| 0.0806 | 60.0 | 18720 | 0.0709 | 0.7329 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230824062849 | dkqjrm | 2023-08-23T23:29:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T21:29:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824062849'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824062849
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2256
- Accuracy: 0.7473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 1.2170 | 0.5307 |
| 0.9844 | 2.0 | 624 | 0.7365 | 0.5090 |
| 0.9844 | 3.0 | 936 | 0.6978 | 0.5632 |
| 0.8956 | 4.0 | 1248 | 0.8855 | 0.4765 |
| 0.8957 | 5.0 | 1560 | 1.0223 | 0.5379 |
| 0.8957 | 6.0 | 1872 | 0.6873 | 0.6137 |
| 0.7665 | 7.0 | 2184 | 0.8629 | 0.6173 |
| 0.7665 | 8.0 | 2496 | 0.6861 | 0.6570 |
| 0.734 | 9.0 | 2808 | 0.6714 | 0.7076 |
| 0.7238 | 10.0 | 3120 | 0.6298 | 0.7184 |
| 0.7238 | 11.0 | 3432 | 0.5975 | 0.7184 |
| 0.6786 | 12.0 | 3744 | 0.8311 | 0.6968 |
| 0.6396 | 13.0 | 4056 | 0.7136 | 0.6751 |
| 0.6396 | 14.0 | 4368 | 0.7183 | 0.6859 |
| 0.6481 | 15.0 | 4680 | 0.6652 | 0.7076 |
| 0.6481 | 16.0 | 4992 | 1.0367 | 0.6823 |
| 0.6106 | 17.0 | 5304 | 0.7197 | 0.6895 |
| 0.6011 | 18.0 | 5616 | 0.6058 | 0.7292 |
| 0.6011 | 19.0 | 5928 | 0.7227 | 0.7112 |
| 0.5978 | 20.0 | 6240 | 1.1472 | 0.6570 |
| 0.5309 | 21.0 | 6552 | 0.6741 | 0.7256 |
| 0.5309 | 22.0 | 6864 | 0.9335 | 0.6787 |
| 0.5392 | 23.0 | 7176 | 0.8296 | 0.7365 |
| 0.5392 | 24.0 | 7488 | 0.9097 | 0.7040 |
| 0.5058 | 25.0 | 7800 | 0.8278 | 0.7292 |
| 0.4669 | 26.0 | 8112 | 1.0859 | 0.6498 |
| 0.4669 | 27.0 | 8424 | 0.9387 | 0.7184 |
| 0.462 | 28.0 | 8736 | 1.0893 | 0.7365 |
| 0.4757 | 29.0 | 9048 | 1.3568 | 0.6859 |
| 0.4757 | 30.0 | 9360 | 1.0252 | 0.7040 |
| 0.4237 | 31.0 | 9672 | 1.0489 | 0.7329 |
| 0.4237 | 32.0 | 9984 | 0.8661 | 0.7292 |
| 0.4275 | 33.0 | 10296 | 0.9781 | 0.7437 |
| 0.3722 | 34.0 | 10608 | 0.8879 | 0.7329 |
| 0.3722 | 35.0 | 10920 | 0.9932 | 0.7292 |
| 0.3741 | 36.0 | 11232 | 1.0509 | 0.7365 |
| 0.3358 | 37.0 | 11544 | 1.3875 | 0.7329 |
| 0.3358 | 38.0 | 11856 | 1.2366 | 0.7220 |
| 0.3415 | 39.0 | 12168 | 1.0563 | 0.7329 |
| 0.3415 | 40.0 | 12480 | 0.9688 | 0.7401 |
| 0.3357 | 41.0 | 12792 | 0.8598 | 0.7329 |
| 0.3094 | 42.0 | 13104 | 1.0506 | 0.7329 |
| 0.3094 | 43.0 | 13416 | 1.3257 | 0.7365 |
| 0.2947 | 44.0 | 13728 | 1.1759 | 0.7365 |
| 0.2832 | 45.0 | 14040 | 1.1699 | 0.7329 |
| 0.2832 | 46.0 | 14352 | 1.1070 | 0.7401 |
| 0.2808 | 47.0 | 14664 | 1.1519 | 0.7473 |
| 0.2808 | 48.0 | 14976 | 1.0674 | 0.7401 |
| 0.2715 | 49.0 | 15288 | 1.1491 | 0.7401 |
| 0.252 | 50.0 | 15600 | 1.0819 | 0.7473 |
| 0.252 | 51.0 | 15912 | 0.9650 | 0.7473 |
| 0.2577 | 52.0 | 16224 | 1.0753 | 0.7437 |
| 0.2579 | 53.0 | 16536 | 1.0896 | 0.7473 |
| 0.2579 | 54.0 | 16848 | 1.0579 | 0.7401 |
| 0.2395 | 55.0 | 17160 | 1.1172 | 0.7509 |
| 0.2395 | 56.0 | 17472 | 1.1540 | 0.7509 |
| 0.2392 | 57.0 | 17784 | 1.2162 | 0.7509 |
| 0.22 | 58.0 | 18096 | 1.1978 | 0.7509 |
| 0.22 | 59.0 | 18408 | 1.2381 | 0.7473 |
| 0.2242 | 60.0 | 18720 | 1.2256 | 0.7473 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DrishtiSharma/codet5-small-Generate-docstrings-for-Python-bs-32 | DrishtiSharma | 2023-08-23T23:28:11Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-23T16:05:23Z | ---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-Generate-docstrings-for-Python-bs-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-Generate-docstrings-for-Python-bs-32
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1105
- Rouge1: 0.3307
- Rouge2: 0.16
- Rougel: 0.297
- Rougelsum: 0.3149
- Gen Len: 16.7441
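A minimal generation sketch; the example function is illustrative and the prompt format is an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "DrishtiSharma/codet5-small-Generate-docstrings-for-Python-bs-32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```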
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.7701 | 1.0 | 4472 | 2.3322 | 0.3225 | 0.1525 | 0.2894 | 0.3067 | 16.3153 |
| 2.4907 | 2.0 | 8944 | 2.2464 | 0.328 | 0.1555 | 0.293 | 0.3119 | 17.0097 |
| 2.405 | 3.0 | 13416 | 2.2004 | 0.3267 | 0.1562 | 0.2934 | 0.311 | 16.4531 |
| 2.3512 | 4.0 | 17888 | 2.1696 | 0.3292 | 0.1571 | 0.2944 | 0.3134 | 17.3872 |
| 2.3144 | 5.0 | 22360 | 2.1503 | 0.3293 | 0.1586 | 0.2954 | 0.3137 | 16.932 |
| 2.2862 | 6.0 | 26832 | 2.1355 | 0.3307 | 0.1588 | 0.2962 | 0.3149 | 17.0269 |
| 2.2666 | 7.0 | 31304 | 2.1246 | 0.33 | 0.1594 | 0.2962 | 0.3144 | 16.7064 |
| 2.2514 | 8.0 | 35776 | 2.1163 | 0.3305 | 0.1595 | 0.2968 | 0.3145 | 16.4765 |
| 2.2401 | 9.0 | 40248 | 2.1120 | 0.3305 | 0.1595 | 0.2967 | 0.3147 | 16.763 |
| 2.2333 | 10.0 | 44720 | 2.1105 | 0.3307 | 0.16 | 0.297 | 0.3149 | 16.7441 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4.dev0
- Tokenizers 0.13.3
|
NobodyExistsOnTheInternet/convenience2epochs | NobodyExistsOnTheInternet | 2023-08-23T23:22:33Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-23T23:21:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
- PEFT 0.6.0.dev0
|
tenkomati/dqn-SpaceInvaderstest | tenkomati | 2023-08-23T23:07:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-23T23:07:18Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 652.00 +/- 219.36
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tenkomati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tenkomati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tenkomati
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ghze/dqn-SpaceInvadersNoFrameskip-v4 | ghze | 2023-08-23T22:53:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-23T22:52:57Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 573.50 +/- 132.53
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ghze -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ghze -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ghze
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
sabre-code/distilbert-base-uncased-finetuned-emotion | sabre-code | 2023-08-23T22:19:49Z | 121 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:dair-ai/emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T20:23:59Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- dair-ai/emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
language:
- en
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
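A minimal inference sketch with the text-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sabre-code/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```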
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 |
redstonehero/anythingqingmix25d_v30 | redstonehero | 2023-08-23T22:07:47Z | 29 | 1 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-23T21:10:44Z | ---
license: creativeml-openrail-m
library_name: diffusers
--- |
redstonehero/comimicry_v10fp16 | redstonehero | 2023-08-23T22:07:44Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-23T21:10:35Z | ---
license: creativeml-openrail-m
library_name: diffusers
--- |
redstonehero/airfuckswildmix_v10 | redstonehero | 2023-08-23T22:07:39Z | 41 | 2 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-23T21:10:56Z | ---
license: creativeml-openrail-m
library_name: diffusers
--- |
redstonehero/furryvixens_v20bakedvae | redstonehero | 2023-08-23T21:42:47Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-23T20:44:31Z | ---
license: creativeml-openrail-m
library_name: diffusers
--- |
felipebandeira/donutlicenses3v3 | felipebandeira | 2023-08-23T21:40:06Z | 114 | 4 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"en",
"dataset:felipebandeira/driverlicenses2k",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-08-16T12:35:01Z | ---
license: mit
datasets:
- felipebandeira/driverlicenses2k
language:
- en
metrics:
- accuracy
pipeline_tag: image-to-text
---
This model extracts information from EU driver's licenses and returns it as JSON. For optimal performance, we recommend that input images:
- have a size of 1192x772
- have high resolution and do not contain light reflection effects
Accuracy
- on validation set: 98%
- on set of real licenses: 63.93%
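A minimal inference sketch using the generic image-to-text pipeline; whether this checkpoint requires a Donut-style task prompt is not documented here, so treat this as an assumption and check the repository files if decoding fails. The image path is a placeholder:

```python
from transformers import pipeline

# "license.png" is a placeholder for a 1192x772 scan of an EU driver's license.
extractor = pipeline("image-to-text", model="felipebandeira/donutlicenses3v3")
print(extractor("license.png"))
```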
Article describing model:
https://medium.com/@ofelipebandeira/transformers-vs-ocr-who-can-read-better-192e6b044dd3
Article describing synthetic dataset used in training:
https://python.plainenglish.io/how-to-create-synthetic-datasets-of-document-images-5f140dee5e40 |
redstonehero/frozenanimation_v10 | redstonehero | 2023-08-23T21:36:07Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-23T20:44:15Z | ---
license: creativeml-openrail-m
library_name: diffusers
--- |
him1411/EDGAR-BART-Base | him1411 | 2023-08-23T21:35:55Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:him1411/EDGAR10-Q",
"arxiv:2109.08079",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-03T18:32:38Z | ---
license: mit
datasets:
- him1411/EDGAR10-Q
language:
- en
tags:
- finance
- ContextNER
- language models
metrics:
- rouge
---
EDGAR-BART-Base
=============
BART base model finetuned on [EDGAR10-Q dataset](https://huggingface.co/datasets/him1411/EDGAR10-Q)
You may want to check out
* Our paper: [CONTEXT-NER: Contextual Phrase Generation at Scale](https://arxiv.org/abs/2109.08079/)
* GitHub: [Click Here](https://github.com/him1411/edgar10q-dataset)
Direct Use
=============
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. **It should not be directly used for production or work that may directly impact people.**
How to Use
=============
You can very easily load the models with Transformers, instead of downloading them manually. The [bart-base model](https://huggingface.co/facebook/bart-base) is the backbone of our model. Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("him1411/EDGAR-BART-Base")
model = AutoModelForSeq2SeqLM.from_pretrained("him1411/EDGAR-BART-Base")
```
Or just clone the model repo
```
git lfs install
git clone https://huggingface.co/him1411/EDGAR-BART-Base
```
Inference Example
=============
Here we provide an example for the "ContextNER" task on a single instance from the dataset.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("him1411/EDGAR-BART-Base")
model = AutoModelForSeq2SeqLM.from_pretrained("him1411/EDGAR-BART-Base")
# Example instance from the EDGAR10-Q dataset.
input_text = "14.5 years . The definite lived intangible assets related to the contracts and trade names had estimated weighted average useful lives of 5.9 years and 14.5 years, respectively, at acquisition."
inputs = tokenizer(input_text, return_tensors="pt")
# Expected output: 'Definite lived intangible assets weighted average remaining useful life'
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
BibTeX Entry and Citation Info
===============
If you are using our model, please cite our paper:
```bibtex
@article{gupta2021context,
title={Context-NER: Contextual Phrase Generation at Scale},
author={Gupta, Himanshu and Verma, Shreyas and Kumar, Tarun and Mishra, Swaroop and Agrawal, Tamanna and Badugu, Amogh and Bhatt, Himanshu Sharad},
journal={arXiv preprint arXiv:2109.08079},
year={2021}
}
``` |
ofri-r/ppo-Huggy | ofri-r | 2023-08-23T21:32:26Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-08-23T21:32:20Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
 browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ofri-r/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
daochf/Lora-HuggyLlama7b-PuceDS-v03x50 | daochf | 2023-08-23T21:32:22Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-23T21:32:16Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
- PEFT 0.5.0
|
dkqjrm/20230824042730 | dkqjrm | 2023-08-23T21:28:35Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T19:27:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230824042730'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230824042730
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5547
- Accuracy: 0.7581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1252 | 1.0 | 623 | 0.6915 | 0.5415 |
| 0.9382 | 2.0 | 1246 | 0.7221 | 0.5307 |
| 1.0555 | 3.0 | 1869 | 0.7387 | 0.5199 |
| 0.9336 | 4.0 | 2492 | 0.9751 | 0.6390 |
| 0.8894 | 5.0 | 3115 | 0.9277 | 0.6643 |
| 0.9066 | 6.0 | 3738 | 1.1836 | 0.6931 |
| 0.8496 | 7.0 | 4361 | 0.8242 | 0.7184 |
| 0.7761 | 8.0 | 4984 | 0.9061 | 0.6859 |
| 0.8175 | 9.0 | 5607 | 0.7474 | 0.7220 |
| 0.7575 | 10.0 | 6230 | 0.8582 | 0.7292 |
| 0.747 | 11.0 | 6853 | 0.8351 | 0.7256 |
| 0.728 | 12.0 | 7476 | 0.8912 | 0.7148 |
| 0.8296 | 13.0 | 8099 | 0.9471 | 0.7220 |
| 0.7327 | 14.0 | 8722 | 1.1407 | 0.7148 |
| 0.7284 | 15.0 | 9345 | 0.7681 | 0.7256 |
| 0.6642 | 16.0 | 9968 | 1.4084 | 0.6679 |
| 0.5888 | 17.0 | 10591 | 0.8413 | 0.7329 |
| 0.6074 | 18.0 | 11214 | 0.7461 | 0.7401 |
| 0.625 | 19.0 | 11837 | 0.9516 | 0.7545 |
| 0.5911 | 20.0 | 12460 | 1.3395 | 0.7292 |
| 0.5322 | 21.0 | 13083 | 1.3924 | 0.7509 |
| 0.5247 | 22.0 | 13706 | 1.1553 | 0.7256 |
| 0.5146 | 23.0 | 14329 | 1.6692 | 0.7040 |
| 0.4493 | 24.0 | 14952 | 1.2315 | 0.7437 |
| 0.399 | 25.0 | 15575 | 1.2710 | 0.7545 |
| 0.3644 | 26.0 | 16198 | 1.5049 | 0.7473 |
| 0.4031 | 27.0 | 16821 | 1.5735 | 0.7401 |
| 0.386 | 28.0 | 17444 | 1.4749 | 0.7220 |
| 0.3735 | 29.0 | 18067 | 0.9541 | 0.7365 |
| 0.356 | 30.0 | 18690 | 1.3936 | 0.7473 |
| 0.3496 | 31.0 | 19313 | 0.9982 | 0.7437 |
| 0.3149 | 32.0 | 19936 | 0.9572 | 0.7581 |
| 0.3094 | 33.0 | 20559 | 1.5663 | 0.7256 |
| 0.2886 | 34.0 | 21182 | 1.5993 | 0.7365 |
| 0.2545 | 35.0 | 21805 | 1.1515 | 0.7545 |
| 0.276 | 36.0 | 22428 | 1.2768 | 0.7473 |
| 0.2645 | 37.0 | 23051 | 1.4290 | 0.7509 |
| 0.262 | 38.0 | 23674 | 1.2363 | 0.7617 |
| 0.2261 | 39.0 | 24297 | 1.3446 | 0.7617 |
| 0.2291 | 40.0 | 24920 | 1.0532 | 0.7509 |
| 0.2178 | 41.0 | 25543 | 1.4745 | 0.7509 |
| 0.2104 | 42.0 | 26166 | 1.3830 | 0.7545 |
| 0.217 | 43.0 | 26789 | 1.7099 | 0.7473 |
| 0.214 | 44.0 | 27412 | 1.7054 | 0.7401 |
| 0.1856 | 45.0 | 28035 | 1.4350 | 0.7545 |
| 0.2014 | 46.0 | 28658 | 1.7266 | 0.7473 |
| 0.1759 | 47.0 | 29281 | 1.2659 | 0.7581 |
| 0.2027 | 48.0 | 29904 | 1.8336 | 0.7401 |
| 0.1871 | 49.0 | 30527 | 1.3398 | 0.7509 |
| 0.1586 | 50.0 | 31150 | 1.4948 | 0.7509 |
| 0.1619 | 51.0 | 31773 | 1.3787 | 0.7545 |
| 0.1665 | 52.0 | 32396 | 1.6532 | 0.7545 |
| 0.1786 | 53.0 | 33019 | 1.4697 | 0.7581 |
| 0.1609 | 54.0 | 33642 | 1.5462 | 0.7653 |
| 0.1304 | 55.0 | 34265 | 1.3577 | 0.7581 |
| 0.1576 | 56.0 | 34888 | 1.7004 | 0.7617 |
| 0.1522 | 57.0 | 35511 | 1.4629 | 0.7581 |
| 0.1496 | 58.0 | 36134 | 1.6336 | 0.7581 |
| 0.1406 | 59.0 | 36757 | 1.5699 | 0.7545 |
| 0.1268 | 60.0 | 37380 | 1.5547 | 0.7581 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
casonshep/spam_message_classification | casonshep | 2023-08-23T21:19:27Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-23T21:14:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: spam_message_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spam_message_classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 297 | 0.0719 | 0.9757 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Kajtson/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | Kajtson | 2023-08-23T21:10:18Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-08-23T20:22:24Z | ---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6857
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8586 | 1.0 | 450 | 1.3795 | 0.55 |
| 0.7835 | 2.0 | 900 | 1.0814 | 0.76 |
| 0.1489 | 3.0 | 1350 | 1.0447 | 0.81 |
| 0.2136 | 4.0 | 1800 | 0.9784 | 0.82 |
| 0.0001 | 5.0 | 2250 | 0.7678 | 0.86 |
| 0.0 | 6.0 | 2700 | 0.5670 | 0.92 |
| 1.2125 | 7.0 | 3150 | 0.8058 | 0.85 |
| 0.0 | 8.0 | 3600 | 0.7256 | 0.87 |
| 0.0 | 9.0 | 4050 | 0.6878 | 0.89 |
| 0.0 | 10.0 | 4500 | 0.6857 | 0.89 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yellowsproket/path-to-save-model | yellowsproket | 2023-08-23T21:02:29Z | 30 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-23T20:54:37Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - yellowsproket/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
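A minimal inference sketch with `diffusers`; the prompt uses the instance token this model was trained on:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yellowsproket/path-to-save-model", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```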
|
vuminhtue/bert-finetuned-squad | vuminhtue | 2023-08-23T21:02:09Z | 70 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-23T18:52:58Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: vuminhtue/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vuminhtue/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5714
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2738 | 0 |
| 0.7819 | 1 |
| 0.5714 | 2 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.9.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
marhatha/ppo-LunarLander-v2 | marhatha | 2023-08-23T21:01:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-23T21:00:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.81 +/- 20.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|