| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
tyavika/Distilbert-QA-Pytorch-FULL | tyavika | 2023-07-16T16:06:16Z | 131 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-28T01:54:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Distilbert-QA-Pytorch-FULL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-QA-Pytorch-FULL
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4175
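For reference, a minimal inference sketch (not part of the original card, and assuming the repository ships a compatible tokenizer) using the 🤗 Transformers `pipeline` API:
```python
from transformers import pipeline

# The repo id is taken from this card; the question/context strings are placeholders.
qa = pipeline("question-answering", model="tyavika/Distilbert-QA-Pytorch-FULL")

result = qa(
    question="What architecture does the model use?",
    context="This checkpoint is a DistilBERT model fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```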
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.297 | 1.0 | 3290 | 1.1823 |
| 0.9448 | 2.0 | 6580 | 1.1464 |
| 0.6704 | 3.0 | 9870 | 1.2624 |
| 0.4478 | 4.0 | 13160 | 1.4175 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyavika/Bert-QA-Pytorch-FULL | tyavika | 2023-07-16T16:05:57Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-28T02:19:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Bert-QA-Pytorch-FULL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-QA-Pytorch-FULL
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1633 | 1.0 | 3290 | 1.0515 |
| 0.8061 | 2.0 | 6580 | 1.0593 |
| 0.533 | 3.0 | 9870 | 1.2154 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/NecmettinErbakan | ailabturkiye | 2023-07-16T16:05:52Z | 0 | 0 | null | [
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T16:02:15Z | ---
license: openrail
language:
- tr
tags:
- music
---
I take no responsibility for any audio created using this model.
|
casque/Creampie_v11 | casque | 2023-07-16T16:05:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T16:03:25Z | ---
license: creativeml-openrail-m
---
|
ailabturkiye/orkundk | ailabturkiye | 2023-07-16T16:03:20Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-16T16:02:09Z | ---
license: openrail
language:
- tr
tags:
- music
---
Orkundk (500 Epoch)
|
tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM64NoBid | tyavika | 2023-07-16T16:02:27Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-12T15:52:35Z | ---
tags:
- generated_from_trainer
model-index:
- name: lr1e5_bs16_layer1_Bert_CNN128LSTM64NoBid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr1e5_bs16_layer1_Bert_CNN128LSTM64NoBid
This model is a fine-tuned version of [tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM64NoBid](https://huggingface.co/tyavika/lr1e5_bs16_layer1_Bert_CNN128LSTM64NoBid) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/mabelmatiz | ailabturkiye | 2023-07-16T16:00:13Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-16T15:58:53Z | ---
license: openrail
language:
- tr
tags:
- music
---
Mabel Matiz (500 Epoch)
|
ailabturkiye/AhmetAga | ailabturkiye | 2023-07-16T15:58:14Z | 0 | 0 | null | [
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T15:37:02Z | ---
license: openrail
language:
- tr
---
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
### Model Details
400 epochs, trained on a 14-minute dataset
### Model Description
Ahmet Aga
- **Developed by:** Flowness "seloistaken"
- **Shared by:** Flowness "seloistaken"
- **Model type:** RVC v2
- **Language:** Turkish
### Model Sources
https://www.youtube.com/watch?v=hwDRaGwfvQI
## Uses
If you use and share this model, please give credit to discord.gg/ailab.
|
localmodels/Orca-Mini-v2-13B-GPTQ | localmodels | 2023-07-16T15:57:59Z | 6 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"arxiv:2306.02707",
"arxiv:2302.13971",
"arxiv:2304.12244",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T15:57:59Z | ---
duplicated_from: localmodels/LLM
---
# Orca Mini v2 13B GPTQ
From: https://huggingface.co/psmathur/orca_mini_v2_13b
## Prompt template
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
{prompt}
### Input:
{input}
### Response:
```
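As an illustration only (not from the original card), the template can be assembled in Python before being handed to whatever inference backend you use; the system message is the one quoted above, and the exact blank-line handling is an assumption:
```python
# Hedged sketch: builds a prompt string following the template above.
system = "You are an AI assistant that follows instruction extremely well. Help as much as you can."
instruction = "Summarise the Orca approach in one sentence."
extra_input = ""  # leave empty when there is no separate input

prompt = (
    f"### System:\n{system}\n"
    f"### User:\n{instruction}\n"
    f"### Input:\n{extra_input}\n"
    f"### Response:\n"
)
print(prompt)
```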
---
| Model | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| orca_mini_v2_13b-GPTQ-4bit-128g.no-act.order | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
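As a rough, hedged sketch (not from the original card), the 4-bit file in the table might be loaded with AutoGPTQ, which the description above mentions; argument names can vary between AutoGPTQ versions:
```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "localmodels/Orca-Mini-v2-13B-GPTQ"  # this card's repository
tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="orca_mini_v2_13b-GPTQ-4bit-128g.no-act.order",  # file name from the table above
    use_safetensors=False,  # assumption about the checkpoint format
    device="cuda:0",
)
```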
---
# Orca Mini v2 13B
An **Uncensored** LLaMA-13b model built in collaboration with [Eric Hartford](https://huggingface.co/ehartford), trained on explain-tuned datasets created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper's dataset construction approaches.
Please note this model has *better code generation capabilities* compared to our original orca_mini_13b, which was trained on the base OpenLLaMA-13b model and which has the [empty-spaces issue and was found not to be good for code generation](https://github.com/openlm-research/open_llama#update-06072023).
# Evaluation
I evaluated orca_mini_v2_13b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| **Task** | **Value** | **Stderr** |
|:------:|:-------------:|:---------:|
|*arc_challenge*|0.5478|0.0145|
|*hellaswag*|0.7023|0.0040|
|*mmlu*|0.4969|0.035|
|*truthfulqa_mc*|0.44|0.0158|
|*Total Average*|0.54675|0.0114|
# Dataset
We used an uncensoring script on top of the previous explain-tuned datasets we built, namely the [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps the student model (i.e. this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301).
Please see the example usage below for how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
Training ran on 4x A100 (80G) GPUs and took around 21 hours, at a cost of $210 (~$10 for a Spot Instance), using [Azure Standard_NC96ads_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nc-a100-v4-series#supported-features).
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts plus leveraging some of the model training code provided by the amazing [FastChat](https://github.com/lm-sys/FastChat).
Here are some of the parameters used during training:
| **Parameter** | **Value** |
|:-------------:|:-------------:|
|*batch_size*|48|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|2048|
|*Epochs*|3|
|*Optimizer*|AdamW|
# Example Usage
Here is the prompt format for the [Oobabooga Text generation UI](https://github.com/oobabooga/text-generation-webui):
```
### System:
{system}
### User:
{instruction}
### Input:
{input}
### Response:
```
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.
Citation:
If you found orca_mini_v2_13b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{orca_mini_v2_13b,
author = {Pankaj Mathur},
title = {orca_mini_v2_13b: An explain tuned LLaMA-13b model on uncensored wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_13b}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ailabturkiye/MehmetAliErbil | ailabturkiye | 2023-07-16T15:57:00Z | 0 | 1 | null | [
"region:us"
] | null | 2023-07-16T15:23:06Z | ---
license: openrail
---
**This is the Turkish voice of host and actor Mehmet Ali Erbil,
trained as an RVC V2 model for 500 epochs.**
_The dataset and training were made by jawbone0._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**You are kindly asked to give credits when sharing any cover made with this model on any platform.**
- Discord: jawbone0
- YouTube: JawBone0 (https://www.youtube.com/@JawBone0)

[](discord.gg/ailab)
 |
NasimB/rarity-all-guten-2p5k-cbt-p5k-mixed | NasimB | 2023-07-16T15:56:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T14:02:13Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: rarity-all-guten-2p5k-cbt-p5k-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rarity-all-guten-2p5k-cbt-p5k-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
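These values map naturally onto 🤗 Transformers `TrainingArguments`; the sketch below is an assumption about how they could be expressed (with Native AMP written as `fp16=True`), not a copy of the actual training script:
```python
from transformers import TrainingArguments

# Illustrative only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="rarity-all-guten-2p5k-cbt-p5k-mixed",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
)
```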
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6916 | 0.29 | 500 | 5.6242 |
| 5.3287 | 0.59 | 1000 | 5.1956 |
| 4.9961 | 0.88 | 1500 | 4.9421 |
| 4.7198 | 1.17 | 2000 | 4.8015 |
| 4.5643 | 1.47 | 2500 | 4.6835 |
| 4.4523 | 1.76 | 3000 | 4.5745 |
| 4.3273 | 2.06 | 3500 | 4.4993 |
| 4.1372 | 2.35 | 4000 | 4.4498 |
| 4.1052 | 2.64 | 4500 | 4.3880 |
| 4.0721 | 2.94 | 5000 | 4.3409 |
| 3.8586 | 3.23 | 5500 | 4.3325 |
| 3.8079 | 3.52 | 6000 | 4.3061 |
| 3.7897 | 3.82 | 6500 | 4.2690 |
| 3.678 | 4.11 | 7000 | 4.2702 |
| 3.5266 | 4.4 | 7500 | 4.2641 |
| 3.5165 | 4.7 | 8000 | 4.2488 |
| 3.5069 | 4.99 | 8500 | 4.2361 |
| 3.3367 | 5.28 | 9000 | 4.2512 |
| 3.3295 | 5.58 | 9500 | 4.2494 |
| 3.3275 | 5.87 | 10000 | 4.2480 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ailabturkiye/SimsekMcqueen | ailabturkiye | 2023-07-16T15:55:13Z | 0 | 1 | null | [
"region:us"
] | null | 2023-07-16T15:32:48Z | ---
license: openrail
---
**This is the voice of Yakta Kopan, the Turkish voice actor of Lightning McQueen (Şimşek McQueen),
trained as an RVC V2 model for 1000 epochs.**
_The dataset and training were made by jawbone0._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__
## Credits
**You are kindly asked to give credits when sharing any cover made with this model on any platform.**
- Discord: jawbone0
- YouTube: JawBone0 (https://www.youtube.com/@JawBone0)

[](discord.gg/ailab)
 |
ailabturkiye/FahrettinAltun | ailabturkiye | 2023-07-16T15:54:26Z | 0 | 0 | null | [
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T15:51:36Z | ---
license: openrail
language:
- tr
tags:
- music
---
Our Director of Communications, Mr. Fahrettin Altun. I take no responsibility for any audio created using this model.
|
helojo/wav2vec2-large-mms-1b-zh-colab | helojo | 2023-07-16T15:54:22Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_6_1",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-12T02:58:25Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- common_voice_6_1
model-index:
- name: wav2vec2-large-mms-1b-zh-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-zh-colab
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.1.0.dev20230711
- Datasets 2.13.1
- Tokenizers 0.13.3
|
casque/vacuum_fellatio1.1 | casque | 2023-07-16T15:54:05Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T15:52:41Z | ---
license: creativeml-openrail-m
---
|
Mehmetakif/MinecraftZombiSesi | Mehmetakif | 2023-07-16T15:53:31Z | 0 | 0 | null | [
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T15:47:14Z | ---
license: openrail
language:
- tr
tags:
- music
--- |
casque/licking_my_dick.sd.v1.2 | casque | 2023-07-16T15:51:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T15:48:28Z | ---
license: creativeml-openrail-m
---
|
giocs2017/Reinforce-cartPolev1 | giocs2017 | 2023-07-16T15:49:01Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T15:48:52Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartPolev1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
0sunfire0/LunarLander-v2_new_00 | 0sunfire0 | 2023-07-16T15:48:29Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T15:48:21Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -81.95 +/- 58.67
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': '0sunfire0/LunarLander-v2_new_00'
'batch_size': 512
'minibatch_size': 128}
```
|
casque/Spooning_Posititon | casque | 2023-07-16T15:42:25Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T15:40:02Z | ---
license: creativeml-openrail-m
---
|
TokenBender/falcon-7b-chat-oasst1 | TokenBender | 2023-07-16T15:40:15Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-16T15:39:30Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
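For illustration (not part of the original card), the same values can be expressed as a `transformers.BitsAndBytesConfig`; treat this as a hedged sketch rather than the exact object used in training:
```python
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```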
### Framework versions
- PEFT 0.4.0.dev0
|
ailabturkiye/Baso | ailabturkiye | 2023-07-16T15:37:45Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-16T15:37:39Z | Temporary Redirect. Redirecting to /ailabturkiye/baso/resolve/main/README.md |
NasimB/all-base-no-repetition-no-cut | NasimB | 2023-07-16T15:34:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T13:44:52Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-no-repetition-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-no-repetition-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7605 | 0.31 | 500 | 5.6506 |
| 5.4119 | 0.62 | 1000 | 5.2199 |
| 5.0579 | 0.94 | 1500 | 4.9665 |
| 4.767 | 1.25 | 2000 | 4.8190 |
| 4.6276 | 1.56 | 2500 | 4.6923 |
| 4.5202 | 1.87 | 3000 | 4.5802 |
| 4.312 | 2.19 | 3500 | 4.5219 |
| 4.2135 | 2.5 | 4000 | 4.4518 |
| 4.1664 | 2.81 | 4500 | 4.3926 |
| 4.033 | 3.12 | 5000 | 4.3652 |
| 3.8843 | 3.44 | 5500 | 4.3407 |
| 3.8737 | 3.75 | 6000 | 4.3029 |
| 3.8047 | 4.06 | 6500 | 4.2883 |
| 3.5939 | 4.37 | 7000 | 4.2854 |
| 3.582 | 4.68 | 7500 | 4.2692 |
| 3.5745 | 5.0 | 8000 | 4.2540 |
| 3.3934 | 5.31 | 8500 | 4.2671 |
| 3.3874 | 5.62 | 9000 | 4.2653 |
| 3.3924 | 5.93 | 9500 | 4.2645 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lxyuan/distilbart-finetuned-summarization | lxyuan | 2023-07-16T15:32:42Z | 159 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"distilbart",
"en",
"dataset:cnn_dailymail",
"dataset:xsum",
"dataset:samsum",
"dataset:ccdv/pubmed-summarization",
"arxiv:2010.13002",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-09T05:23:35Z | ---
tags:
- generated_from_trainer
- distilbart
model-index:
- name: distilbart-finetuned-summarization
results: []
license: apache-2.0
datasets:
- cnn_dailymail
- xsum
- samsum
- ccdv/pubmed-summarization
language:
- en
metrics:
- rouge
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-finetuned-summarization
This model is a further fine-tuned version of [distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the combination of four different summarisation datasets:
- [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail)
- [samsum](https://huggingface.co/datasets/samsum)
- [xsum](https://huggingface.co/datasets/xsum)
- [ccdv/pubmed-summarization](https://huggingface.co/datasets/ccdv/pubmed-summarization)
Please check out the official model page and paper:
- [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6)
- [Pre-trained Summarization Distillation](https://arxiv.org/abs/2010.13002)
## Training and evaluation data
One can reproduce the dataset using the following code:
```python
from datasets import DatasetDict, load_dataset
from datasets import concatenate_datasets
xsum_dataset = load_dataset("xsum")
pubmed_dataset = load_dataset("ccdv/pubmed-summarization").rename_column("article", "document").rename_column("abstract", "summary")
cnn_dataset = load_dataset("cnn_dailymail", '3.0.0').rename_column("article", "document").rename_column("highlights", "summary")
samsum_dataset = load_dataset("samsum").rename_column("dialogue", "document")
summary_train = concatenate_datasets([xsum_dataset["train"], pubmed_dataset["train"], cnn_dataset["train"], samsum_dataset["train"]])
summary_validation = concatenate_datasets([xsum_dataset["validation"], pubmed_dataset["validation"], cnn_dataset["validation"], samsum_dataset["validation"]])
summary_test = concatenate_datasets([xsum_dataset["test"], pubmed_dataset["test"], cnn_dataset["test"], samsum_dataset["test"]])
raw_datasets = DatasetDict()
raw_datasets["train"] = summary_train
raw_datasets["validation"] = summary_validation
raw_datasets["test"] = summary_test
```
## Inference example
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", model="lxyuan/distilbart-finetuned-summarization")
text = """SINGAPORE: The Singapore Police Force on Sunday (Jul 16) issued a warning over a
fake SMS impersonating as its "anti-scam centre (ASC)".
"In this scam variant, members of the public would receive a scam SMS from 'ASC',
requesting them to download and install an “anti-scam” app to ensure the security
of their devices," said the police.
"The fake SMS would direct members of the public to a URL link leading to an
Android Package Kit (APK) file, an application created for Android’s operating
system purportedly from 'ASC'."
The fake website has an icon to download the “anti-scam” app and once downloaded,
Android users are asked to allow accessibility services to enable the service.
While the fake app purportedly claims to help identify and prevent scams by
providing comprehensive protection and security, downloading it may enable
scammers to gain remote access to devices.
"Members of the public are advised not to download any suspicious APK files
on their devices as they may contain malware which will allow scammers to
access and take control of the device remotely as well as to steal passwords
stored in the device," said the police.
Members of the public are advised to adopt the following precautionary measures,
including adding anti-virus or anti-malware apps to their devices. They should
also disable “install unknown app” or “unknown sources” in their phone settings.
Users should check the developer information on the app listing as well as the
number of downloads and user reviews to ensure it is a reputable and legitimate
app, the police said.
Any fraudulent transactions should be immediately reported to the banks.
"""
pipe(text)
>>>"""The Singapore Police Force has issued a warning over a fake SMS
impersonating as its "anti-scam centre" that asks members of the public
to download an Android app to ensure the security of their devices, the
force said on Sunday. The fake SMS would direct people to a URL link
leading to an Android Package Kit (APK) file, an application created
for Android’s operating system purportedly from "ASC".
"""
```
## Training procedure
Notebook link: [here](https://github.com/LxYuan0420/nlp/blob/main/notebooks/distilbart-finetune-summarisation.ipynb)
### Training hyperparameters
The following hyperparameters were used during training:
- evaluation_strategy="epoch",
- save_strategy="epoch",
- logging_strategy="epoch",
- learning_rate=2e-5,
- per_device_train_batch_size=2,
- per_device_eval_batch_size=2,
- gradient_accumulation_steps=64,
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- weight_decay=0.01,
- save_total_limit=2,
- num_train_epochs=4,
- predict_with_generate=True,
- fp16=True,
- push_to_hub=True
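Since these bullets read like keyword arguments, a hedged `Seq2SeqTrainingArguments` sketch assembling them might look like this (an assumption about how the notebook wires things up, not a copy of it):
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative assembly of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="distilbart-finetuned-summarization",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    logging_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=64,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=True,
)
```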
### Training results
_Training is still in progress_
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|-------|---------------|-----------------|--------|--------|--------|-----------|---------|
| 0 | 1.779700 | 1.719054 | 40.003900 | 17.907100 | 27.882500 | 34.888600 | 88.893600 |
| 1 | 1.633800 | 1.710876 | 40.628800 | 18.470200 | 28.428100 | 35.577500 | 88.885000 |
| 2 | 1.566100 | 1.694476 | 40.928500 | 18.695300 | 28.613300 | 35.813300 | 88.993700 |
| 3 | 1.515700 | 1.691141 | 40.860500 | 18.696500 | 28.672700 | 35.734600 | 88.457300 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/CavsKarahanli | ailabturkiye | 2023-07-16T15:31:16Z | 0 | 0 | null | [
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-16T15:27:29Z | ---
license: openrail
language:
- tr
tags:
- music
---
Streamer Cavs Karahanli. I take no responsibility for any audio created using this model.
|
manuu01/q-FrozenLake-v1-4x4-noSlippery | manuu01 | 2023-07-16T14:57:47Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T14:57:25Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumed: the Deep RL Course notebooks use Gymnasium

# load_from_hub is the helper defined in the Deep RL Course (unit 2) notebook
model = load_from_hub(repo_id="manuu01/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Jonathaniu/alpaca-breast-cancer-13b-mix_data_2 | Jonathaniu | 2023-07-16T14:52:46Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-16T14:52:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
|
Saideva/title_generation | Saideva | 2023-07-16T14:38:55Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-16T14:10:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: title_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# title_generation
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 41.5236
- Rouge2: 17.5894
- Rougel: 37.2852
- Rougelsum: 37.2749
- Gen Len: 13.3542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0 | 1.0 | 3748 | nan | 41.5236 | 17.5894 | 37.2852 | 37.2749 | 13.3542 |
| 0.0 | 2.0 | 7496 | nan | 41.5236 | 17.5894 | 37.2852 | 37.2749 | 13.3542 |
| 0.0 | 3.0 | 11244 | nan | 41.5236 | 17.5894 | 37.2852 | 37.2749 | 13.3542 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PaulineJamin/ppo-SnowballTarget | PaulineJamin | 2023-07-16T14:38:23Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-16T14:38:14Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: PaulineJamin/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
margosabry/food_classifier | margosabry | 2023-07-16T14:28:12Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-16T13:50:22Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: margosabry/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# margosabry/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3853
- Validation Loss: 0.3150
- Train Accuracy: 0.928
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8055 | 1.6705 | 0.808 | 0 |
| 1.2418 | 0.8233 | 0.883 | 1 |
| 0.7004 | 0.5248 | 0.912 | 2 |
| 0.5037 | 0.3802 | 0.926 | 3 |
| 0.3853 | 0.3150 | 0.928 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
huggingFacing/ddpm-butterflies-128 | huggingFacing | 2023-07-16T14:11:21Z | 0 | 0 | diffusers | [
"diffusers",
"en",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-07-16T14:09:03Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: /content/drive/MyDrive/image_and_text
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/content/drive/MyDrive/image_and_text` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
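The card leaves the snippet as a TODO; as a hedged sketch (assuming the standard `DDPMPipeline` API and that the weights live in this repository), unconditional sampling might look like:
```python
from diffusers import DDPMPipeline

# Repo id assumed from this card; adjust if the weights are hosted elsewhere.
pipeline = DDPMPipeline.from_pretrained("huggingFacing/ddpm-butterflies-128")
image = pipeline(num_inference_steps=1000).images[0]
image.save("sample.png")
```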
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Tian7/ddpm-butterflies-128/tensorboard?#scalars)
|
olegs/distil-ast-audioset-finetuned-gtzan | olegs | 2023-07-16T14:09:35Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:bookbot/distil-ast-audioset",
"base_model:finetune:bookbot/distil-ast-audioset",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-16T13:02:16Z | ---
license: apache-2.0
base_model: bookbot/distil-ast-audioset
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distil-ast-audioset-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.93
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan
This model is a fine-tuned version of [bookbot/distil-ast-audioset](https://huggingface.co/bookbot/distil-ast-audioset) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5022
- Accuracy: 0.93
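For a quick check (a hedged sketch, not from the original card), the fine-tuned checkpoint should be usable through the audio-classification `pipeline`:
```python
from transformers import pipeline

# Repo id taken from this card; the input can be a path to a local audio file.
classifier = pipeline("audio-classification", model="olegs/distil-ast-audioset-finetuned-gtzan")
predictions = classifier("some_song.wav")  # hypothetical file name
print(predictions[:3])  # top genre predictions with scores
```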
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8727 | 1.0 | 113 | 0.6650 | 0.81 |
| 0.6665 | 2.0 | 226 | 0.7639 | 0.74 |
| 0.5306 | 3.0 | 339 | 0.6683 | 0.76 |
| 0.2793 | 4.0 | 452 | 0.7423 | 0.82 |
| 0.0867 | 5.0 | 565 | 0.6301 | 0.85 |
| 0.0156 | 6.0 | 678 | 0.8905 | 0.83 |
| 0.2298 | 7.0 | 791 | 0.4492 | 0.92 |
| 0.0073 | 8.0 | 904 | 0.9028 | 0.83 |
| 0.0664 | 9.0 | 1017 | 0.6387 | 0.85 |
| 0.0001 | 10.0 | 1130 | 0.5022 | 0.87 |
| 0.0001 | 11.0 | 1243 | 0.4047 | 0.91 |
| 0.0 | 12.0 | 1356 | 0.3988 | 0.92 |
| 0.0 | 13.0 | 1469 | 0.6225 | 0.91 |
| 0.0 | 14.0 | 1582 | 0.6075 | 0.86 |
| 0.0 | 15.0 | 1695 | 0.5259 | 0.89 |
| 0.0 | 16.0 | 1808 | 0.5014 | 0.92 |
| 0.0 | 17.0 | 1921 | 0.5004 | 0.93 |
| 0.0 | 18.0 | 2034 | 0.5008 | 0.93 |
| 0.0 | 19.0 | 2147 | 0.5022 | 0.93 |
| 0.0 | 20.0 | 2260 | 0.5022 | 0.93 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
lucasbertola/ppo-SnowballTarget | lucasbertola | 2023-07-16T14:08:18Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-16T14:08:12Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: lucasbertola/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
antoniodee/fin-tench | antoniodee | 2023-07-16T13:48:41Z | 0 | 0 | null | [
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-16T13:43:49Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### fin_tench Dreambooth model trained by antoniodee with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
SwampMan/a2c-AntBulletEnv-v0 | SwampMan | 2023-07-16T13:44:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T13:43:45Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1250.47 +/- 141.94
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
NasimB/all-base-rarity-all-guten-rarity-all-2p5k-iorder-est-5p5k-mostf | NasimB | 2023-07-16T13:17:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T11:29:42Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rarity-all-guten-rarity-all-2p5k-iorder-est-5p5k-mostf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-rarity-all-guten-rarity-all-2p5k-iorder-est-5p5k-mostf
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7586 | 0.32 | 500 | 5.6557 |
| 5.4132 | 0.63 | 1000 | 5.2227 |
| 5.0611 | 0.95 | 1500 | 4.9660 |
| 4.7718 | 1.26 | 2000 | 4.8202 |
| 4.64 | 1.58 | 2500 | 4.6997 |
| 4.5178 | 1.89 | 3000 | 4.5866 |
| 4.308 | 2.21 | 3500 | 4.5285 |
| 4.223 | 2.52 | 4000 | 4.4602 |
| 4.1757 | 2.84 | 4500 | 4.3982 |
| 4.0214 | 3.15 | 5000 | 4.3825 |
| 3.8976 | 3.47 | 5500 | 4.3455 |
| 3.8816 | 3.78 | 6000 | 4.3106 |
| 3.7798 | 4.1 | 6500 | 4.3020 |
| 3.6074 | 4.41 | 7000 | 4.2988 |
| 3.5991 | 4.73 | 7500 | 4.2789 |
| 3.56 | 5.04 | 8000 | 4.2749 |
| 3.408 | 5.36 | 8500 | 4.2785 |
| 3.407 | 5.67 | 9000 | 4.2775 |
| 3.401 | 5.99 | 9500 | 4.2772 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
madoe001/a2c-AntBulletEnv-v0 | madoe001 | 2023-07-16T12:58:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T12:56:52Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1411.03 +/- 55.48
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
WALIDALI/lyrieldiff | WALIDALI | 2023-07-16T12:55:45Z | 2 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-16T12:50:56Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### LyrielDiff Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
larry-jiang/RL | larry-jiang | 2023-07-16T12:48:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T12:47:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.32 +/- 20.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
UholoDala/tweet_sentiments_analysis | UholoDala | 2023-07-16T12:30:23Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-16T12:11:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tweet_sentiments_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_sentiments_analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1411
- eval_f1-score: 0.9585
- eval_runtime: 62.6587
- eval_samples_per_second: 31.919
- eval_steps_per_second: 3.99
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Rihong/ppo-LunarLander-v2 | Rihong | 2023-07-16T12:20:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T12:19:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.93 +/- 18.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20 | ALM-AHME | 2023-07-16T12:15:04Z | 199 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-16T09:38:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Splitted-Resized
split: train
args: Splitted-Resized
metrics:
- name: Accuracy
type: accuracy
value: 0.9943422913719944
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0229
- Accuracy: 0.9943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2053 | 1.0 | 199 | 0.1227 | 0.9496 |
| 0.1302 | 2.0 | 398 | 0.0665 | 0.9736 |
| 0.0784 | 3.0 | 597 | 0.0600 | 0.9778 |
| 0.1181 | 4.0 | 796 | 0.0449 | 0.9849 |
| 0.208 | 5.0 | 995 | 0.0393 | 0.9887 |
| 0.0057 | 6.0 | 1194 | 0.0229 | 0.9943 |
| 0.0017 | 7.0 | 1393 | 0.0263 | 0.9939 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sjdata/speecht5_finetuned_single_speaker_en_test_librivox | sjdata | 2023-07-16T12:09:19Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"en",
"dataset:speecht5_finetuned_single_speaker_en_test_librivox",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-13T12:31:39Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- speecht5_finetuned_single_speaker_en_test_librivox
model-index:
- name: SpeechT5 Single Speaker test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 Single Speaker test
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the single_speaker_en_test_librivox dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4215
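As a rough, unverified sketch (not part of the auto-generated card), synthesis could follow the usual SpeechT5 recipe. The x-vector dataset and index below come from the official SpeechT5 examples and are assumptions here; if this repo does not ship processor files, fall back to `microsoft/speecht5_tts` for the processor.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "sjdata/speecht5_finetuned_single_speaker_en_test_librivox"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a speaker embedding; this x-vector may not match the fine-tuning speaker
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```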
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4809 | 1.78 | 1000 | 0.4215 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1 | hafidikhsan | 2023-07-16T11:14:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-16T11:12:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4017
- Accuracy: 0.25
- F1: 0.1
- Precision: 0.0625
- Recall: 0.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 1.3826 | 1.0 | 500 | 1.4017 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.4074 | 2.0 | 1000 | 1.3922 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.3984 | 3.0 | 1500 | 1.3868 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.387 | 4.0 | 2000 | 1.3863 | 0.25 | 0.1 | 0.0625 | 0.25 |
| 1.3861 | 5.0 | 2500 | 1.3863 | 0.25 | 0.1 | 0.0625 | 0.25 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gfs0508/AIron-Trans-PT2EN | gfs0508 | 2023-07-16T11:10:39Z | 0 | 1 | keras | [
"keras",
"translation",
"pt",
"en",
"license:mit",
"region:us"
] | translation | 2023-07-16T11:02:01Z | ---
license: mit
language:
- pt
- en
library_name: keras
pipeline_tag: translation
---
# AIron-Trans-PT2EN
## License
- MIT
## Overview
AIron-Trans-PT2EN is a Portuguese to English translation model developed using the Keras library.
## Description
AIron-Trans-PT2EN is a translation model that allows you to translate phrases and texts from Portuguese to English. It has been trained using the Long Short-Term Memory (LSTM) neural network architecture and implemented using the Keras library.
## Features
- Translation from Portuguese to English
- Model trained using the Keras library
- LSTM architecture for better contextual understanding
- Text preprocessing for improved translation quality
## Usage
You can use this translation model in your own projects by following the instructions below:
1. Install the necessary dependencies (Keras, TensorFlow, etc.).
2. Load the trained model using the `load_model()` function from Keras.
3. Preprocess input sentences using the same preprocessing steps used during training.
4. Call the `translate_sentence()` function to get the translation of the input sentence.
Code example:
```python
from tensorflow import keras

# Load the trained model (the path is a placeholder)
model = keras.models.load_model('path/to/model.h5')

# `preprocess_sentence` and `translate_sentence` are helper functions you implement
# yourself (steps 3 and 4 above); they are not provided by Keras.
preprocessed_sentence = preprocess_sentence('Olá, como vai?')
translated_sentence = translate_sentence(preprocessed_sentence, model)

print(translated_sentence)
```
## Contribution
If you encounter any issues, have ideas for improvements, or would like to contribute to this project, feel free to open an issue or submit a pull request. We welcome contributions!
## Acknowledgments
We would like to thank all contributors who helped develop and improve this translation model.
|
sagarsdesai/ppo-Huggy | sagarsdesai | 2023-07-16T11:10:05Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-16T11:09:59Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: sagarsdesai/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Beams24/nzyi | Beams24 | 2023-07-16T11:08:38Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T11:06:38Z | ---
license: creativeml-openrail-m
---
|
hyunussarioglu/ppo-Huggy | hyunussarioglu | 2023-07-16T11:04:45Z | 42 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-16T11:04:39Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: hyunussarioglu/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/all-base-rarity-all-bnc-rarity-iorder-est-5p5k-mostf | NasimB | 2023-07-16T11:04:22Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T09:15:19Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rarity-all-bnc-rarity-iorder-est-5p5k-mostf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-rarity-all-bnc-rarity-iorder-est-5p5k-mostf
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3552
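As a rough sketch (not part of the auto-generated card), the checkpoint can presumably be sampled like any GPT-2-style causal LM, assuming the repository also contains the tokenizer files:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/all-base-rarity-all-bnc-rarity-iorder-est-5p5k-mostf")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```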
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7751 | 0.31 | 500 | 5.6534 |
| 5.4091 | 0.63 | 1000 | 5.2238 |
| 5.0666 | 0.94 | 1500 | 4.9739 |
| 4.7773 | 1.25 | 2000 | 4.8259 |
| 4.6406 | 1.56 | 2500 | 4.7086 |
| 4.5289 | 1.88 | 3000 | 4.6001 |
| 4.3302 | 2.19 | 3500 | 4.5391 |
| 4.2295 | 2.5 | 4000 | 4.4722 |
| 4.1833 | 2.82 | 4500 | 4.4085 |
| 4.0396 | 3.13 | 5000 | 4.3880 |
| 3.9019 | 3.44 | 5500 | 4.3625 |
| 3.8912 | 3.75 | 6000 | 4.3198 |
| 3.8042 | 4.07 | 6500 | 4.3143 |
| 3.6122 | 4.38 | 7000 | 4.3069 |
| 3.6013 | 4.69 | 7500 | 4.2897 |
| 3.5881 | 5.01 | 8000 | 4.2790 |
| 3.4114 | 5.32 | 8500 | 4.2918 |
| 3.4083 | 5.63 | 9000 | 4.2889 |
| 3.4077 | 5.94 | 9500 | 4.2889 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
vlkn/falcon_instruct_6 | vlkn | 2023-07-16T10:57:41Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-16T10:50:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: falcon_instruct_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_instruct_6
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 30
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Kapiche/msmarco-MiniLM-L6-cos-v5 | Kapiche | 2023-07-16T10:31:33Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-02-02T22:27:10Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# msmarco-MiniLM-L6-cos-v5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following are some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. dot-product is preferred as it is faster. Euclidean distance is proportional to dot-product and can also be used.
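For illustration, a small sketch (not from the original card) showing that dot-product and cosine-similarity scores agree for these normalized embeddings:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
emb = model.encode(
    ["How many people live in London?", "Around 9 Million people live in London"],
    convert_to_tensor=True,
)

# For normalized embeddings the two scores should match up to floating-point error
print(util.dot_score(emb[0], emb[1]))
print(util.cos_sim(emb[0], emb[1]))
```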
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
neurae/albert-dnd-intents | neurae | 2023-07-16T09:38:16Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:neurae/dnd_style_intents",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-16T09:58:57Z | ---
datasets:
- neurae/dnd_style_intents
language:
- en
pipeline_tag: text-classification
license: apache-2.0
metrics:
- accuracy
- f1
---
This is ALBERT-base fine-tuned on the dnd_style_intents dataset with tuned learning rate, learning-rate scheduler, and weight decay.
| parameters | value |
|---------------|----------|
| learning rate | 5e-5 |
| lr scheduler | linear |
| weight decay | 0 |
The model achieves the following metrics on the test data from the dataset:
| metric | value |
|----------|-------|
| accuracy | 0.981 |
| Macro F1 | 0.979 |
| Micro F1 | 0.985 | |
Xxmlala/q-FrozenLake-v1-4x4-noSlippery | Xxmlala | 2023-07-16T09:29:27Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T09:29:19Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Xxmlala/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
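`load_from_hub` above is the small helper from the course notebook, not a library import. A minimal sketch of what such a helper can look like, using `huggingface_hub` (the pickle layout follows the snippet above; the `gymnasium` import is an assumption about your setup):
```python
import pickle
import gymnasium as gym  # or `import gym`, depending on your environment
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Xxmlala/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```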
|
NasimB/guten-rarity-all-end-19k-ctx-512-finegrained-eval | NasimB | 2023-07-16T09:07:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T07:08:53Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-end-19k-ctx-512-finegrained-eval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-end-19k-ctx-512-finegrained-eval
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.9362 | 0.24 | 100 | 7.3168 |
| 6.5524 | 0.48 | 200 | 6.1279 |
| 5.9236 | 0.71 | 300 | 5.7874 |
| 5.6556 | 0.95 | 400 | 5.5952 |
| 5.4733 | 1.19 | 500 | 5.4416 |
| 5.2958 | 1.43 | 600 | 5.2824 |
| 5.1307 | 1.66 | 700 | 5.1223 |
| 4.9829 | 1.9 | 800 | 4.9860 |
| 4.8024 | 2.14 | 900 | 4.8963 |
| 4.6927 | 2.38 | 1000 | 4.7992 |
| 4.6095 | 2.61 | 1100 | 4.6988 |
| 4.516 | 2.85 | 1200 | 4.6015 |
| 4.3713 | 3.09 | 1300 | 4.5147 |
| 4.2277 | 3.33 | 1400 | 4.4417 |
| 4.1862 | 3.56 | 1500 | 4.3820 |
| 4.1371 | 3.8 | 1600 | 4.3342 |
| 4.059 | 4.04 | 1700 | 4.2893 |
| 3.8884 | 4.28 | 1800 | 4.2612 |
| 3.8665 | 4.51 | 1900 | 4.2299 |
| 3.8437 | 4.75 | 2000 | 4.1981 |
| 3.815 | 4.99 | 2100 | 4.1766 |
| 3.6574 | 5.23 | 2200 | 4.1724 |
| 3.6435 | 5.46 | 2300 | 4.1629 |
| 3.6348 | 5.7 | 2400 | 4.1584 |
| 3.6424 | 5.94 | 2500 | 4.1557 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rakaaa/tree-lora | rakaaa | 2023-07-16T09:01:55Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-16T07:30:55Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - rakaaa/tree-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.




|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-80-10-10 | hafidikhsan | 2023-07-16T08:55:00Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-16T08:51:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-80-10-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-80-10-10
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7905
- Accuracy: 0.816
- F1: 0.8141
- Precision: 0.8148
- Recall: 0.816
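As a rough, unverified sketch (not part of the auto-generated card), the model can presumably be queried with the `audio-classification` pipeline; the audio path is a placeholder and should point to a 16 kHz speech clip:
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-80-10-10",
)
print(classifier("sample_utterance.wav"))  # placeholder path
```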
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7212 | 1.0 | 500 | 0.8608 | 0.622 | 0.6089 | 0.6237 | 0.622 |
| 0.7321 | 2.0 | 1000 | 0.7336 | 0.688 | 0.6819 | 0.6884 | 0.688 |
| 0.4413 | 3.0 | 1500 | 0.6422 | 0.774 | 0.7727 | 0.7729 | 0.774 |
| 0.3669 | 4.0 | 2000 | 1.0008 | 0.754 | 0.7521 | 0.7606 | 0.754 |
| 0.0219 | 5.0 | 2500 | 0.9872 | 0.782 | 0.7808 | 0.7801 | 0.782 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SADAF-IMAMU/train | SADAF-IMAMU | 2023-07-16T08:54:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-25T09:54:23Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9948
- Macro F1: 0.7856
- Precision: 0.7820
- Recall: 0.7956
- Kappa: 0.6940
- Accuracy: 0.7956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 101 | 1.1562 | 0.6031 | 0.5561 | 0.7044 | 0.4967 | 0.7044 |
| No log | 2.0 | 203 | 0.9119 | 0.7151 | 0.7107 | 0.7672 | 0.6236 | 0.7672 |
| No log | 3.0 | 304 | 0.8493 | 0.7280 | 0.7139 | 0.7734 | 0.6381 | 0.7734 |
| No log | 4.0 | 406 | 0.8087 | 0.7455 | 0.7632 | 0.7648 | 0.6421 | 0.7648 |
| 0.9431 | 5.0 | 507 | 0.7735 | 0.7779 | 0.7741 | 0.7931 | 0.6858 | 0.7931 |
| 0.9431 | 6.0 | 609 | 0.8201 | 0.7753 | 0.7735 | 0.7869 | 0.6797 | 0.7869 |
| 0.9431 | 7.0 | 710 | 0.8564 | 0.7886 | 0.7883 | 0.8017 | 0.7004 | 0.8017 |
| 0.9431 | 8.0 | 812 | 0.8712 | 0.7799 | 0.7754 | 0.7894 | 0.6854 | 0.7894 |
| 0.9431 | 9.0 | 913 | 0.9142 | 0.7775 | 0.7751 | 0.7869 | 0.6811 | 0.7869 |
| 0.2851 | 10.0 | 1015 | 0.9007 | 0.7820 | 0.7764 | 0.7943 | 0.6913 | 0.7943 |
| 0.2851 | 11.0 | 1116 | 0.9425 | 0.7859 | 0.7825 | 0.7956 | 0.6940 | 0.7956 |
| 0.2851 | 12.0 | 1218 | 0.9798 | 0.7815 | 0.7797 | 0.7906 | 0.6869 | 0.7906 |
| 0.2851 | 13.0 | 1319 | 0.9895 | 0.7895 | 0.7860 | 0.7993 | 0.7003 | 0.7993 |
| 0.2851 | 14.0 | 1421 | 0.9872 | 0.7854 | 0.7813 | 0.7943 | 0.6935 | 0.7943 |
| 0.1273 | 14.93 | 1515 | 0.9948 | 0.7856 | 0.7820 | 0.7956 | 0.6940 | 0.7956 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
BaleChen/REINFORCE-pixelcopter-test | BaleChen | 2023-07-16T08:51:54Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T07:58:26Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: REINFORCE-pixelcopter-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.80 +/- 26.25
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
manmyung/a2c-PandaReachDense-v2 | manmyung | 2023-07-16T08:43:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T08:40:02Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.88 +/- 0.21
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Hans14/PPO-LunarLander-v2 | Hans14 | 2023-07-16T08:42:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T08:41:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.99 +/- 12.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
li-ping/falcon_300csv_to_sheng | li-ping | 2023-07-16T08:39:26Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-16T08:28:11Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
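For reference, a hedged sketch of the equivalent `transformers` quantization config (the values mirror the list above; this is not code taken from the training run):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
```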
### Framework versions
- PEFT 0.4.0.dev0
|
esculapeso/biogpt-finetuned-twspookytest | esculapeso | 2023-07-16T08:33:13Z | 130 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"biogpt",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T07:47:54Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: biogpt-finetuned-twspookytest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-finetuned-twspookytest
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.1962 |
| No log | 2.0 | 6 | 3.0132 |
| No log | 3.0 | 9 | 2.9607 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
openlm-research/open_llama_3b_v2 | openlm-research | 2023-07-16T08:32:00Z | 25,360 | 149 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T00:39:43Z | ---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_3b_v2'
# model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
watcharakorn/whisper-small-th-v2 | watcharakorn | 2023-07-16T08:24:22Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"th-asr-leaderboard",
"generated_from_trainer",
"th",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-16T08:21:55Z | ---
language:
- th
license: apache-2.0
base_model: openai/whisper-small
tags:
- th-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small th - mix dataset v.2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: th
split: test
args: 'config: th, split: test'
metrics:
- name: Wer
type: wer
value: 0.37791454289122656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small th - mix dataset v.2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2980
- Wer: 0.3779
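As a rough, unverified sketch (not part of the auto-generated card), transcription could look like this; the audio file is a placeholder for any 16 kHz Thai speech clip:
```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="watcharakorn/whisper-small-th-v2",
)
print(transcriber("thai_sample.wav"))  # placeholder path
```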
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3654 | 0.26 | 1000 | 0.2980 | 0.3779 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/polla_mix_2.4D | digiplay | 2023-07-16T08:23:45Z | 334 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-16T06:56:58Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/110130?modelVersionId=118734
Simple image I made thru Huggingface's API :

prompt :
> pink spider with pink heart symbol
***Original Author's DEMO images :***
,%20blonde_hair,%20commentary_request,%20fate_prototype,%20fate_(series),%20green_eyes,%20hood,%20male_foc.jpeg)


|
Ricky1981/Lgmx | Ricky1981 | 2023-07-16T08:19:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T07:33:57Z | ---
license: creativeml-openrail-m
---
|
gpcarl123/resnet18_mnist | gpcarl123 | 2023-07-16T08:16:35Z | 0 | 0 | timm | [
"timm",
"en",
"dataset:mnist",
"model-index",
"region:us"
] | null | 2023-07-16T07:48:41Z | ---
language:
- en
library_name: timm
datasets:
- mnist
metrics:
- accuracy
model-index:
- name: resnet18_mnist
results:
- task:
type: image-classification
dataset:
name: MNIST
type: mnist
metrics:
- type: accuracy
value: 0.9936
---
# Usage
```python
import timm
import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

MNIST_PATH = './datasets/mnist'

net = timm.create_model("resnet18", pretrained=False, num_classes=10)
# MNIST images have a single channel, so swap in a 1-channel stem convolution
net.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
net.load_state_dict(
    torch.hub.load_state_dict_from_url(
        "https://huggingface.co/gpcarl123/resnet18_mnist/resolve/main/resnet18_mnist.pth",
        map_location="cpu",
        file_name="resnet18_mnist.pth",
    )
)

preprocessor = torchvision.transforms.Normalize((0.1307,), (0.3081,))
transform = transforms.Compose([transforms.ToTensor()])
test_set = datasets.MNIST(root=MNIST_PATH, train=False, download=True, transform=transform)
test_loader = DataLoader(test_set, batch_size=5, shuffle=False, num_workers=2)

for images, target in test_loader:
    print(net(preprocessor(images)))
    print(target)
    break
``` |
digiplay/polla_mix_2.5D | digiplay | 2023-07-16T07:56:07Z | 50 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-16T06:57:17Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/110130?modelVersionId=118741
Sample image I made thru Huggingface's API :

Original Author's DEMO images :


|
shihab17/bengali-bn-to-en | shihab17 | 2023-07-16T07:51:36Z | 25 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"bn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-05-23T10:43:30Z | ---
library_name: transformers
pipeline_tag: translation
language:
- bn
---
### How to use
You can use this model directly with a pipeline:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("shihab17/bengali-bn-to-en")
model = AutoModelForSeq2SeqLM.from_pretrained("shihab17/bengali-bn-to-en")
sentence = 'ম্যাচ শেষে পুরস্কার বিতরণের মঞ্চে তামিমের মুখে মোস্তাফিজের প্রশংসা শোনা গেল'
translator = pipeline("translation_bn_to_en", model=model, tokenizer=tokenizer)
output = translator(sentence)
print(output)
``` |
murakami-dev/distilbert-base-uncased-finetuned-emotion | murakami-dev | 2023-07-16T07:47:19Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-16T07:36:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.922976796795522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2127
- Accuracy: 0.923
- F1: 0.9230
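As a rough sketch (not part of the auto-generated card), the classifier can presumably be used through the `text-classification` pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="murakami-dev/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))
```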
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7809 | 1.0 | 250 | 0.3023 | 0.9045 | 0.9005 |
| 0.2412 | 2.0 | 500 | 0.2127 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/zodiac_eclipse_DAY1 | digiplay | 2023-07-16T07:40:46Z | 285 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-14T08:32:01Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/108417/zodiac-eclipse-day1
Sample image I made thru Huggingface's API :
```
dog eat mango icecream
```

Original Author's DEMO images :
),%20((masterpiece)),%20(detailed),%20alluring%20succubus,%20ethereal%20beauty,%20perched%20on%20a%20cloud,%20(fantasy%20illustration_1.3.jpeg)
)),.jpeg)
|
tuanhnh/Reinforce-0 | tuanhnh | 2023-07-16T07:30:47Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T07:30:40Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 453.60 +/- 109.04
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
li-ping/falcon_0csv_to_sheng | li-ping | 2023-07-16T07:12:27Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-16T06:56:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
laserchalk/kangaroo-training-part-10 | laserchalk | 2023-07-16T06:53:40Z | 6 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-16T06:39:24Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kangaroo-training-part-10 Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
NasimB/guten-rarity-all-cut-20k | NasimB | 2023-07-16T06:43:28Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T04:47:07Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-cut-20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-cut-20k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
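As a rough sketch, these values map onto `transformers.TrainingArguments` as follows; the output directory name is an assumption and the remaining arguments mirror the list above:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="guten-rarity-all-cut-20k",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
)
```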
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6961 | 0.29 | 500 | 5.6410 |
| 5.3325 | 0.58 | 1000 | 5.1967 |
| 4.9884 | 0.88 | 1500 | 4.9500 |
| 4.7101 | 1.17 | 2000 | 4.8016 |
| 4.5563 | 1.46 | 2500 | 4.6746 |
| 4.446 | 1.75 | 3000 | 4.5705 |
| 4.3244 | 2.05 | 3500 | 4.4929 |
| 4.1291 | 2.34 | 4000 | 4.4489 |
| 4.0992 | 2.63 | 4500 | 4.3891 |
| 4.0577 | 2.92 | 5000 | 4.3368 |
| 3.8593 | 3.21 | 5500 | 4.3329 |
| 3.8077 | 3.51 | 6000 | 4.3001 |
| 3.778 | 3.8 | 6500 | 4.2669 |
| 3.6848 | 4.09 | 7000 | 4.2684 |
| 3.513 | 4.38 | 7500 | 4.2630 |
| 3.5142 | 4.68 | 8000 | 4.2467 |
| 3.4975 | 4.97 | 8500 | 4.2338 |
| 3.3389 | 5.26 | 9000 | 4.2463 |
| 3.3207 | 5.55 | 9500 | 4.2462 |
| 3.3201 | 5.84 | 10000 | 4.2453 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
bochen0909/ppo-LunarLander-v2 | bochen0909 | 2023-07-16T06:41:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T06:41:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.97 +/- 20.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal example (the checkpoint filename inside the repo is assumed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed to follow
# the usual "<algo>-<env>.zip" convention of the SB3 integration.
checkpoint = load_from_hub(repo_id="bochen0909/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BaleChen/REINFORCE-cartpolev1-test | BaleChen | 2023-07-16T06:39:08Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T06:38:57Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: REINFORCE-cartpolev1-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 456.90 +/- 89.34
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NasimB/bnc-rarity-no-cut-shuffled | NasimB | 2023-07-16T06:24:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T04:27:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc-rarity-no-cut-shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc-rarity-no-cut-shuffled
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7157 | 0.29 | 500 | 5.6437 |
| 5.3513 | 0.58 | 1000 | 5.2021 |
| 5.0016 | 0.88 | 1500 | 4.9595 |
| 4.7286 | 1.17 | 2000 | 4.8122 |
| 4.5693 | 1.46 | 2500 | 4.6857 |
| 4.4647 | 1.75 | 3000 | 4.5770 |
| 4.3308 | 2.05 | 3500 | 4.5068 |
| 4.1402 | 2.34 | 4000 | 4.4574 |
| 4.1123 | 2.63 | 4500 | 4.3983 |
| 4.0711 | 2.92 | 5000 | 4.3468 |
| 3.8657 | 3.22 | 5500 | 4.3414 |
| 3.8086 | 3.51 | 6000 | 4.3099 |
| 3.7977 | 3.8 | 6500 | 4.2728 |
| 3.6947 | 4.09 | 7000 | 4.2729 |
| 3.5188 | 4.39 | 7500 | 4.2684 |
| 3.5211 | 4.68 | 8000 | 4.2523 |
| 3.5159 | 4.97 | 8500 | 4.2387 |
| 3.3414 | 5.26 | 9000 | 4.2532 |
| 3.3357 | 5.56 | 9500 | 4.2520 |
| 3.328 | 5.85 | 10000 | 4.2517 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lovelyxs/rl_course_vizdoom_health_gathering_supreme | lovelyxs | 2023-07-16T05:56:49Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T05:56:44Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.28 +/- 4.85
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r lovelyxs/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# module path assumed from the Sample-Factory ViZDoom examples
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# module path assumed from the Sample-Factory ViZDoom examples
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ardhies/vira | ardhies | 2023-07-16T05:48:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-23T00:46:30Z | ---
license: creativeml-openrail-m
---
|
Vasanth/distilbert-stock-tweet-sentiment-analysis | Vasanth | 2023-07-16T05:26:06Z | 185 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-16T05:15:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-stock-tweet-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-stock-tweet-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6075
- Accuracy: 0.782
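In the absence of a usage section, a minimal inference sketch might look like the following; the model id comes from this repository, while the example tweet and the returned label names are illustrative:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Vasanth/distilbert-stock-tweet-sentiment-analysis",
)
# Example tweet is made up; the pipeline returns a label and a confidence score.
print(classifier("$AAPL beat earnings expectations, shares up after hours"))
```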
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.686 | 1.0 | 1000 | 0.5916 | 0.7745 |
| 0.4804 | 2.0 | 2000 | 0.5635 | 0.7812 |
| 0.3644 | 3.0 | 3000 | 0.6075 | 0.782 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ltmai/morgan-embed-bio-clinical-bert-ddi | ltmai | 2023-07-16T05:24:59Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-07-15T18:38:02Z | ---
tags:
- generated_from_trainer
model-index:
- name: morgan-embed-bio-clinical-bert-ddi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# morgan-embed-bio-clinical-bert-ddi
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000628
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-60-20-20 | hafidikhsan | 2023-07-16T05:24:57Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-16T05:22:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-60-20-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-60-20-20
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9015
- Accuracy: 0.743
- F1: 0.7432
- Precision: 0.7495
- Recall: 0.743
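A minimal inference sketch (the audio path is a placeholder and the label set produced by the classifier is not documented in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-60-20-20",
)
# "speech.wav" is a placeholder for a 16 kHz recording of English speech.
print(classifier("speech.wav"))
```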
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8727 | 1.0 | 375 | 0.8777 | 0.574 | 0.5558 | 0.5611 | 0.574 |
| 0.834 | 2.0 | 750 | 0.8086 | 0.652 | 0.6470 | 0.6558 | 0.652 |
| 0.6609 | 3.0 | 1125 | 0.8289 | 0.695 | 0.6926 | 0.6945 | 0.695 |
| 0.329 | 4.0 | 1500 | 0.9585 | 0.755 | 0.7558 | 0.7607 | 0.755 |
| 0.1628 | 5.0 | 1875 | 1.1191 | 0.751 | 0.7488 | 0.7479 | 0.751 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Denilah/distilbert-base-uncased-finetuned-emotion | Denilah | 2023-07-16T05:15:46Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-16T03:24:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.937
- name: F1
type: f1
value: 0.9373121473490384
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1565
- Accuracy: 0.937
- F1: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4774 | 1.0 | 1000 | 0.1971 | 0.923 | 0.9226 |
| 0.147 | 2.0 | 2000 | 0.1565 | 0.937 | 0.9373 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
j-hyeok/taxi-v3 | j-hyeok | 2023-07-16T04:27:10Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T04:27:06Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="j-hyeok/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
j-hyeok/q-FrozenLake-v1-4x4-noSlippery | j-hyeok | 2023-07-16T04:22:13Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T04:22:07Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="j-hyeok/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SerCe/tortoise-tts-ruslan | SerCe | 2023-07-16T04:21:51Z | 0 | 9 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-06-07T10:12:18Z | ---
license: apache-2.0
---
The _tortoise-tts-ruslan_ model is a Tortoise-TTS model capable of speaking Russian.
The model was first finetuned on public_youtube700_val+buriy_audiobooks_2_val from [Russian Open Speech To Text](https://learn.microsoft.com/en-us/azure/open-datasets/dataset-open-speech-text?tabs=azure-storage).
The model was then finetuned on [RUSLAN: Russian Spoken Language Corpus For Speech Synthesis](https://ruslan-corpus.github.io/).
The model is able to generate generic male voices, see [examples](https://huggingface.co/SerCe/tortoise-tts-ruslan/tree/main/examples/random).
Additionally, the model is suitable for further finetuning on any Russian male voice, e.g. see a [finetuned](https://huggingface.co/SerCe/tortoise-tts-ruslan/tree/main/examples/finetuned) voice of Yury Dud (the finetuned model weights are not included).
The code from [ai-voice-cloning](https://git.ecker.tech/mrq/ai-voice-cloning) was used to train the model. |
NasimB/children-rarity-all-guten-log-rarity-all | NasimB | 2023-07-16T04:21:14Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-16T02:19:49Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: children-rarity-all-guten-log-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# children-rarity-all-guten-log-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7036 | 0.29 | 500 | 5.6365 |
| 5.348 | 0.58 | 1000 | 5.2064 |
| 4.99 | 0.87 | 1500 | 4.9589 |
| 4.7208 | 1.16 | 2000 | 4.8071 |
| 4.5602 | 1.46 | 2500 | 4.6761 |
| 4.4513 | 1.75 | 3000 | 4.5690 |
| 4.3332 | 2.04 | 3500 | 4.4907 |
| 4.1308 | 2.33 | 4000 | 4.4479 |
| 4.1002 | 2.62 | 4500 | 4.3912 |
| 4.0711 | 2.91 | 5000 | 4.3370 |
| 3.8621 | 3.2 | 5500 | 4.3334 |
| 3.803 | 3.49 | 6000 | 4.3002 |
| 3.7865 | 3.79 | 6500 | 4.2683 |
| 3.6992 | 4.08 | 7000 | 4.2633 |
| 3.5158 | 4.37 | 7500 | 4.2591 |
| 3.5163 | 4.66 | 8000 | 4.2433 |
| 3.501 | 4.95 | 8500 | 4.2300 |
| 3.3525 | 5.24 | 9000 | 4.2437 |
| 3.3213 | 5.53 | 9500 | 4.2424 |
| 3.3235 | 5.82 | 10000 | 4.2416 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
laserchalk/kangaroo-training-part-7 | laserchalk | 2023-07-16T04:15:03Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-16T04:04:01Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Kangaroo-training-part-7 Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
1112lee/setfit-model | 1112lee | 2023-07-16T03:42:36Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-07-16T03:28:00Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# 1112lee/setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
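As a toy illustration of that two-step procedure with the `setfit` API of the time (the base checkpoint, labels, and hyperparameters below are placeholders, not the ones used for this model):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst"],
    "label": [1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # sentence pairs generated per example
)
trainer.train()                       # step 2: fits the classification head
```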
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("1112lee/setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5 | Evan-Lin | 2023-07-16T03:40:20Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2023-07-15T19:51:18Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
OptimalScale/robin-13b-v2-delta | OptimalScale | 2023-07-16T03:14:08Z | 1,546 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.12420",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-28T05:55:54Z | ---
inference: false
---
# Robin Model Card
## Model Details
Robin is a series of models finetuned from LLaMA on several high-quality datasets.
- **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/OptimalScale/LMFlow/
- **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1
- **Paper:** https://arxiv.org/abs/2306.12420
- **Demo:** https://lmflow.com/
## Uses
Robin is primarily intended for research on large language models and chatbots, and caters to researchers specializing in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
We provide four kinds of demos including:
- Online Service: If you don't want to run any code and just want to try our models, we deploy our instruction-tuned LLaMA for you to try.
- Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab.
- Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab.
- Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources.
Please refer to https://github.com/OptimalScale/LMFlow#demos
## Training Details
Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz).
The new training split is created by merging the following datasets:
- ShareGPT: randomly sample 50K English data and 10K Chinese data from ShareGPT.
- GPT-4-LLM: 52K English data from GPT-4-LLM.
- BELLE: randomly sample 80K Chinese data from BELLE.
See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Evaluation
Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418).
See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Citation
If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420):
```
@misc{lmflow,
author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang},
title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://optimalscale.github.io/LMFlow/}},
}
``` |
ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20 | ALM-AHME | 2023-07-16T03:13:16Z | 12 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-15T00:35:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Splitted-Resized
split: train
args: Splitted-Resized
metrics:
- name: Accuracy
type: accuracy
value: 0.9900990099009901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0353
- Accuracy: 0.9901
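A minimal inference sketch (the image path is a placeholder; the class labels come from the fine-tuning dataset and are not listed in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20",
)
# "slide.png" is a placeholder path to a histopathology image.
print(classifier("slide.png"))
```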
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5207 | 1.0 | 199 | 0.4745 | 0.8887 |
| 0.2029 | 2.0 | 398 | 0.2072 | 0.9401 |
| 0.1615 | 3.0 | 597 | 0.1489 | 0.9547 |
| 0.1662 | 4.0 | 796 | 0.1312 | 0.9562 |
| 0.1986 | 5.0 | 995 | 0.1026 | 0.9698 |
| 0.0854 | 6.0 | 1194 | 0.0583 | 0.9802 |
| 0.0538 | 7.0 | 1393 | 0.0568 | 0.9835 |
| 0.0977 | 8.0 | 1592 | 0.0654 | 0.9793 |
| 0.6971 | 9.0 | 1791 | 0.6821 | 0.5450 |
| 0.211 | 10.0 | 1990 | 0.1654 | 0.9326 |
| 0.1775 | 11.0 | 2189 | 0.0859 | 0.9665 |
| 0.0042 | 12.0 | 2388 | 0.0353 | 0.9901 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GarbageCollector/EFX2 | GarbageCollector | 2023-07-16T03:07:37Z | 0 | 0 | null | [
"stable-diffusion",
"safetensors",
"text-to-image",
"license:unknown",
"region:us"
] | text-to-image | 2023-07-16T02:27:12Z | ---
tags:
- stable-diffusion
- safetensors
pipeline_tag: text-to-image
license: unknown
---
<p>this place is my garbage collection.<br>
some models are not better than others.</p>
<p>___SAMPLES___</p>
<p>LOOMER<br>
<img src="https://huggingface.co/GarbageCollector/EFX2/resolve/main/samples/LOOMER.jpg"/>
</p> |
OptimalScale/robin-65b-v2-delta | OptimalScale | 2023-07-16T02:48:33Z | 1,534 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.12420",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-11T06:48:38Z | ---
inference: false
---
# Robin Model Card
## Model Details
Robin is a series of models finetuned from LLaMA on several high-quality datasets.
- **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/OptimalScale/LMFlow/
- **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1
- **Paper:** https://arxiv.org/abs/2306.12420
- **Demo:** https://lmflow.com/
## Uses
Robin is primarily intended for research on large language models and chatbots, and caters to researchers specializing in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
We provide four kinds of demos including:
- Online Service: If you don't want to run any code and just want to try our models, we deploy our instruction-tuned LLaMA for you to try.
- Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab.
- Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab.
- Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources.
Please refer to https://github.com/OptimalScale/LMFlow#demos
## Training Details
Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz).
The new training split is created by merging the following datasets:
- ShareGPT: randomly sample 50K English data and 10K Chinese data from ShareGPT.
- GPT-4-LLM: 52K English data from GPT-4-LLM.
- BELLE: randomly sample 80K Chinese data from BELLE.
See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Evaluation
Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418).
See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf).
## Citation
If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420):
```
@misc{lmflow,
author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang},
title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://optimalscale.github.io/LMFlow/}},
}
``` |
PeterBrendan/pbjsGPT2v2 | PeterBrendan | 2023-07-16T02:32:02Z | 144 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-12T15:07:20Z | ---
license: mit
widget:
- text: bidderTimeout
- text: Usebidcache
- text: bidderSequence
- text: customPriceBucket
---
## Model: GPT-2
### Model name: pbjsGPT2v2
### Model description:
This fine-tuned version of the GPT-2 model was trained on a subset of 1100+ publisher domains' Prebid config files. Its focus is on sophisticated Prebid publishers. The model provides insights into how these publishers configure their Prebid settings. By inputting a Prebid config setting, such as ***bidderTimeout***, the model generates sample Prebid configuration settings based on the collected data. It aims to assist publishers in understanding different configurations used by sophisticated publishers.
### Intended uses:
This model is intended to assist publishers in understanding and exploring how other publishers configure their Prebid settings. It serves as a reference for gaining insights into common configurations, best practices, and different approaches used by top publishers across various domains.
### Limitations:
The generated Prebid configuration settings are based on the data from the training set and may not cover all possible configurations or reflect the specific requirements of a particular domain. Publishers should carefully review and adapt the generated configurations to their specific needs and business rules.
### How to use:
To use this model, provide a Prebid config setting, such as ***bidderSequence***. The model will generate a sample Prebid configuration related to that input based on the collected data.
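A minimal sketch of that workflow with the `transformers` pipeline API (generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="PeterBrendan/pbjsGPT2v2")
# Prompt with a Prebid config setting; the model continues with sample configuration values.
result = generator("bidderTimeout", max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```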
### Training data:
This model was trained on a subset of 1100+ publisher domains Prebid config files. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry.
### Training procedure:
The model was fine-tuned using the GPT-2 base model with the aforementioned dataset.
### Evaluation results:
The evaluation of this model focuses on its ability to generate coherent and valid Prebid configuration settings based on the provided Prebid config setting. Human evaluators reviewed the generated configurations for relevance and accuracy.
### Safety and bias considerations:
The model is trained on data from actual Prebid config files and aims to provide accurate insights into publishers' configurations. However, it's important to note that biases may exist in the original data itself, as the training data is based on real-world configurations. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.
Users are encouraged to exercise caution and use their expertise in interpreting and adapting the generated Prebid configurations for their own use. The model should be seen as a helpful tool to gain inspiration and understanding of common Prebid settings but not as a substitute for thorough testing and manual review of the final configurations. |
PeterBrendan/pbjs_gpt2 | PeterBrendan | 2023-07-16T02:14:23Z | 144 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T02:03:58Z | ---
license: mit
widget:
- text: bidderTimeout
- text: Usebidcache
- text: bidderSequence
- text: customPriceBucket
---
**Model:** GPT-2
**Model name:** pbjs_gpt2
**Model description:** This fine-tuned version of the GPT-2 model was trained on a dataset of 1100+ publisher domains' Prebid config files. It aims to provide insights into how other publishers configure their Prebid settings. Given a Prebid config setting, such as ***bidderTimeout***, the model can generate sample Prebid configuration settings based on the collected data. It helps publishers gain an understanding of how different publishers configure their Prebid settings.
**Intended uses:** This model is intended to assist publishers in understanding and exploring how other publishers configure their Prebid settings. It serves as a reference to gain insights into common configurations, best practices, and different approaches used by publishers across various domains.
**Limitations:** It's important to note that the generated Prebid configuration settings are based on the data from the training set and may not cover all possible configurations or reflect the specific requirements of a particular domain. Publishers should carefully review and adapt the generated configurations to their specific needs and business rules.
**How to use:** To use this model, provide a Prebid config setting, such as ***bidderSequence***. The model will generate a sample Prebid configuration related to that input based on the collected data.
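A minimal sketch of that workflow, here calling `generate()` directly instead of the pipeline helper (sampling settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PeterBrendan/pbjs_gpt2")
model = AutoModelForCausalLM.from_pretrained("PeterBrendan/pbjs_gpt2")

# Prompt with a Prebid config setting; the model continues with sample configuration values.
inputs = tokenizer("bidderSequence", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```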
**Training data:** This model was trained on a dataset consisting of 1100+ publisher domains' Prebid config files. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry.
**Training procedure:** The model was fine-tuned using the GPT-2 base model with the aforementioned dataset. The training loss achieved was 0.43277667846199475.
**Evaluation results:** The evaluation of this model focuses on its ability to generate coherent and valid Prebid configuration settings based on the provided Prebid config setting. Human evaluators reviewed the generated configurations for relevance and accuracy.
**Safety and bias considerations:** The model is trained on data from actual Prebid config files and aims to provide accurate insights into publishers' configurations. However, it's important to note that biases may exist in the original data itself, as the training data is based on real-world configurations. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.
Users are encouraged to exercise caution and use their expertise in interpreting and adapting the generated Prebid configurations for their own use. The model should be seen as a helpful tool to gain inspiration and understanding of common Prebid settings but not as a substitute for thorough testing and manual review of the final configurations.
|
monideep2255/spell_correction_M04_V3 | monideep2255 | 2023-07-16T02:10:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-16T00:59:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: spell_correction_M04_V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spell_correction_M04_V3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 269 | 0.2687 |
| 1.8467 | 2.0 | 538 | 0.0361 |
| 1.8467 | 3.0 | 807 | 0.0241 |
| 0.0357 | 4.0 | 1076 | 0.0198 |
| 0.0357 | 5.0 | 1345 | 0.0199 |
| 0.0159 | 6.0 | 1614 | 0.0175 |
| 0.0159 | 7.0 | 1883 | 0.0179 |
| 0.0077 | 8.0 | 2152 | 0.0189 |
| 0.0077 | 9.0 | 2421 | 0.0183 |
| 0.006 | 10.0 | 2690 | 0.0183 |
| 0.006 | 11.0 | 2959 | 0.0191 |
| 0.0044 | 12.0 | 3228 | 0.0186 |
| 0.0044 | 13.0 | 3497 | 0.0192 |
| 0.0033 | 14.0 | 3766 | 0.0189 |
| 0.0024 | 15.0 | 4035 | 0.0173 |
| 0.0024 | 16.0 | 4304 | 0.0171 |
| 0.0026 | 17.0 | 4573 | 0.0183 |
| 0.0026 | 18.0 | 4842 | 0.0181 |
| 0.0021 | 19.0 | 5111 | 0.0177 |
| 0.0021 | 20.0 | 5380 | 0.0174 |
| 0.0015 | 21.0 | 5649 | 0.0173 |
| 0.0015 | 22.0 | 5918 | 0.0174 |
| 0.0016 | 23.0 | 6187 | 0.0178 |
| 0.0016 | 24.0 | 6456 | 0.0180 |
| 0.0018 | 25.0 | 6725 | 0.0175 |
| 0.0018 | 26.0 | 6994 | 0.0171 |
| 0.0017 | 27.0 | 7263 | 0.0175 |
| 0.0014 | 28.0 | 7532 | 0.0177 |
| 0.0014 | 29.0 | 7801 | 0.0178 |
| 0.0013 | 30.0 | 8070 | 0.0178 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1+cu102
- Datasets 2.13.1
- Tokenizers 0.13.3
|
manmyung/ppo-SnowballTarget | manmyung | 2023-07-16T02:08:22Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-16T02:08:19Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: manmyung/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Hex820000/anime_v10 | Hex820000 | 2023-07-16T01:57:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-16T01:46:09Z | ---
license: creativeml-openrail-m
---
|