modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
sd-dreambooth-library/EpicMixVirtualRealismv6 | sd-dreambooth-library | 2023-07-15T08:14:40Z | 134 | 4 | diffusers | [
"diffusers",
"realism",
"stable diffusion",
"epicmix",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-15T08:40:05Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- realism
- stable diffusion
- epicmix
---
This is the Realism you've PROBABLY not been waiting for, but are getting anyway.
This is a branch of V3 and contains NONE of V4 or Pastel, and none of V5.
The only negative embeds used were the ones contained in Nocrypt's notebook; beyond that, none were used.
We're moving this permanently to the SD Dreambooth Library and absolve any ownership of it.
It's no longer on CivitAI; details on what went into making it are below:
# MIX BUCKET
<details>
<summary>THE BUCKET OF JOY</summary>
Epicv3 + Noise Offset
Babes 11 (NO VAE)
Cake Mix
Epic Portrait + Retro (two of our own trained models, I think)
Plus Lucious Mix
</details>
|
YojitShinde/Reinforce-PixelCopter-v0 | YojitShinde | 2023-07-15T08:05:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T08:04:57Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.70 +/- 38.34
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nolanaatama/phtn | nolanaatama | 2023-07-15T08:04:27Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T07:58:19Z | ---
license: creativeml-openrail-m
---
|
Serjssv/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | Serjssv | 2023-07-15T07:48:17Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-14T13:11:04Z | ---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3273
- Accuracy: 0.91
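For a quick sanity check, inference could look like the sketch below (this is not from the original card: it assumes the standard `transformers` audio-classification pipeline, and the audio path is a placeholder):
```python
from transformers import pipeline

# Hypothetical inference sketch for this fine-tuned AST checkpoint.
classifier = pipeline(
    "audio-classification",
    model="Serjssv/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)

# "song.wav" is a placeholder path to any audio clip the pipeline can decode.
print(classifier("song.wav", top_k=3))
```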
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5056 | 1.0 | 112 | 0.5669 | 0.85 |
| 0.2324 | 2.0 | 225 | 0.5131 | 0.85 |
| 0.2623 | 3.0 | 337 | 0.6539 | 0.79 |
| 0.4419 | 4.0 | 450 | 0.7401 | 0.83 |
| 0.0177 | 5.0 | 562 | 0.5134 | 0.85 |
| 0.0026 | 6.0 | 675 | 0.3351 | 0.9 |
| 0.0046 | 7.0 | 787 | 0.5120 | 0.88 |
| 0.0005 | 8.0 | 900 | 0.5165 | 0.91 |
| 0.2003 | 9.0 | 1012 | 0.3453 | 0.91 |
| 0.0001 | 10.0 | 1125 | 0.3438 | 0.91 |
| 0.0003 | 11.0 | 1237 | 0.3324 | 0.92 |
| 0.0 | 12.0 | 1350 | 0.3999 | 0.89 |
| 0.0 | 13.0 | 1462 | 0.3152 | 0.91 |
| 0.0001 | 14.0 | 1575 | 0.3212 | 0.92 |
| 0.0 | 15.0 | 1687 | 0.3220 | 0.92 |
| 0.0 | 16.0 | 1800 | 0.3343 | 0.9 |
| 0.0 | 17.0 | 1912 | 0.3324 | 0.91 |
| 0.0 | 18.0 | 2025 | 0.3311 | 0.91 |
| 0.0 | 19.0 | 2137 | 0.3292 | 0.91 |
| 0.0 | 19.91 | 2240 | 0.3273 | 0.91 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lilBuffaloEric/autoaudit_20230714_attempt2 | lilBuffaloEric | 2023-07-15T07:17:32Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-15T07:02:08Z | This model is a fine-tuned version produced by finetune.py from the GitHub repository tolen/alpaca-lora, trained with the following parameters. Note that the training dataset can be found at https://github.com/ddzipp/AutoAudit_LLM_Dataset
# model/data params
base_model: str = "yahma/llama-7b-hf",
data_path: str = "", # dataset see repository https://github.com/ddzipp/AutoAudit_LLM_Dataset/tree/v0.0.1
output_dir: str = "./autoaudit_20230703_attempt2",
# training hyperparams
batch_size: int = 4,
micro_batch_size: int = 1,
num_epochs: int = 28,
learning_rate: float = 3e-4,
cutoff_len: int = 512,
val_set_size: int = 400,
# lora hyperparams
lora_r: int = 16,
lora_alpha: int = 16,
lora_dropout: float = 0.05,
lora_target_modules: List[str] = [
"q_proj",
"k_proj",
"v_proj",
"o_proj"
],
# llm hyperparams
train_on_inputs: bool = True, # if False, masks out inputs in loss
add_eos_token: bool = False,
group_by_length: bool = False, # faster, but produces an odd training loss curve |
lilBuffaloEric/autoaudit_20230703_attempt1 | lilBuffaloEric | 2023-07-15T07:10:37Z | 0 | 4 | null | [
"region:us"
] | null | 2023-07-15T06:31:54Z | This model is a fine-tuned version produced by finetune.py from the GitHub repository tolen/alpaca-lora, trained with the following parameters. Note that the training dataset can be found at https://github.com/ddzipp/AutoAudit_LLM_Dataset
# model/data params
base_model: str = "yahma/llama-7b-hf",
data_path: str = "", # dataset see repository https://github.com/ddzipp/AutoAudit_LLM_Dataset/tree/v0.0.1
output_dir: str = "./autoaudit_20230703_attempt1",
# training hyperparams
batch_size: int = 4,
micro_batch_size: int = 1,
num_epochs: int = 14,
learning_rate: float = 3e-4,
cutoff_len: int = 512,
val_set_size: int = 400,
# lora hyperparams
lora_r: int = 16,
lora_alpha: int = 16,
lora_dropout: float = 0.05,
lora_target_modules: List[str] = [
"q_proj",
"k_proj",
"v_proj",
"o_proj"
],
# llm hyperparams
train_on_inputs: bool = True, # if False, masks out inputs in loss
add_eos_token: bool = False,
group_by_length: bool = False, # faster, but produces an odd training loss curve |
blackmount8/mpt-7b-instruct-ct2-int8_float16 | blackmount8 | 2023-07-15T06:52:02Z | 2 | 0 | transformers | [
"transformers",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-07-15T05:40:47Z | ---
inference: false
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
---
# blackmount8/mpt-7b-instruct-ct2-int8_float16
Int8_float16 version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct), quantized using CTranslate2.
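Because this repository stores a CTranslate2 conversion rather than standard `transformers` weights, generation would normally go through the `ctranslate2` generator API. The following is only a usage sketch under assumptions (local download via `snapshot_download`, the GPT-NeoX-20B tokenizer mentioned below, and illustrative sampling settings):
```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Download the converted model files from this repository (assumed to be a CT2 model directory).
model_dir = snapshot_download("blackmount8/mpt-7b-instruct-ct2-int8_float16")

generator = ctranslate2.Generator(model_dir, device="cuda", compute_type="int8_float16")
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

prompt = "What is a quoll?"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

# By default the returned sequence includes the prompt tokens.
results = generator.generate_batch([tokens], max_length=128, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```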
## MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
galaxywavee/personaluse | galaxywavee | 2023-07-15T06:50:48Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-04-18T03:00:57Z | ---
license: bigscience-openrail-m
---
majicMIX realistic >>>
Recommended positive prompts: Best quality, masterpiece, ultra high res, (photorealistic:1.4), 1girl
If you want a darker image, add: in the dark, deep shadow, low key, etc.
Negative prompt: use ng_deepnegative_v1_75t and badhandv4
Sampler: DPM++ 2M Karras (bug-fixed) or DPM++ SDE Karras
Steps: 20~40
Hires upscaler: R-ESRGAN 4x+ or 4x-UltraSharp
Hires upscale: 2
Hires steps: 15
Denoising strength: 0.2~0.5
CFG scale: 6-8
clip skip 2
Aerial (Animation and img2img) >>> Trigger Words : aerialstyle
|
zen-E/q-Taxi-v3-v1 | zen-E | 2023-07-15T06:36:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T06:35:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.64
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
model = load_from_hub(repo_id="zen-E/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
NasimB/guten-rarity-all-end-19k-ctx-512 | NasimB | 2023-07-15T06:32:42Z | 143 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T05:38:01Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-end-19k-ctx-512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-end-19k-ctx-512
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2404
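As a minimal usage sketch (not part of the original card; the prompt is illustrative), the checkpoint can be loaded with the standard `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Hypothetical inference sketch for this fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="NasimB/guten-rarity-all-end-19k-ctx-512")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```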
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5135 | 1.19 | 500 | 5.4526 |
| 4.9916 | 2.38 | 1000 | 4.8062 |
| 4.3998 | 3.56 | 1500 | 4.4088 |
| 3.9739 | 4.75 | 2000 | 4.2180 |
| 3.6922 | 5.94 | 2500 | 4.1726 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward1 | Evan-Lin | 2023-07-15T05:39:14Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2023-07-14T17:58:29Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin//tmp/tmp71nhx1t_/Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-beam10")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmp71nhx1t_/Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-beam10")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmp71nhx1t_/Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-beam10")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
sgarg/falcon-7b-qlora-fiqa-finbot-v1 | sgarg | 2023-07-15T05:30:56Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-15T04:43:18Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
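Since this repository contains only a PEFT adapter, loading it presumably means attaching it to the base model recorded in the adapter config. A sketch under that assumption (the base model name, dtype, and device placement are not stated in this card):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "sgarg/falcon-7b-qlora-fiqa-finbot-v1"
config = PeftConfig.from_pretrained(adapter_id)

# Assumed: the adapter config points back at the original base model (likely a Falcon-7B variant).
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    trust_remote_code=True,  # older Falcon checkpoints relied on custom modeling code
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```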
|
kelvinih/taser-bert-base-uncased | kelvinih | 2023-07-15T05:29:51Z | 0 | 0 | null | [
"pytorch",
"license:mit",
"region:us"
] | null | 2023-07-15T05:27:05Z | ---
license: mit
---
# Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering
This repository includes the model for
[Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering](https://aclanthology.org/2023.acl-short.159/).
If you find this useful, please cite the following paper:
```
@inproceedings{cheng-etal-2023-task,
title = "Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering",
author = "Cheng, Hao and
Fang, Hao and
Liu, Xiaodong and
Gao, Jianfeng",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-short.159",
pages = "1864--1875",
}
```
|
digiplay/Opiate_v2 | digiplay | 2023-07-15T05:07:02Z | 333 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T04:16:25Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/69587?modelVersionId=98101
Original Author's DEMO images :




|
NasimB/guten-rarity-end-cut-19k | NasimB | 2023-07-15T04:56:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T03:03:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-end-cut-19k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-end-cut-19k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.69 | 0.29 | 500 | 5.6412 |
| 5.3327 | 0.59 | 1000 | 5.2058 |
| 4.9884 | 0.88 | 1500 | 4.9570 |
| 4.7105 | 1.18 | 2000 | 4.8008 |
| 4.5563 | 1.47 | 2500 | 4.6777 |
| 4.4438 | 1.77 | 3000 | 4.5652 |
| 4.3057 | 2.06 | 3500 | 4.4916 |
| 4.1258 | 2.36 | 4000 | 4.4456 |
| 4.1001 | 2.65 | 4500 | 4.3854 |
| 4.0586 | 2.94 | 5000 | 4.3319 |
| 3.8297 | 3.24 | 5500 | 4.3249 |
| 3.8029 | 3.53 | 6000 | 4.2962 |
| 3.7812 | 3.83 | 6500 | 4.2655 |
| 3.6544 | 4.12 | 7000 | 4.2687 |
| 3.5166 | 4.42 | 7500 | 4.2598 |
| 3.4969 | 4.71 | 8000 | 4.2438 |
| 3.4978 | 5.01 | 8500 | 4.2328 |
| 3.3159 | 5.3 | 9000 | 4.2445 |
| 3.3203 | 5.59 | 9500 | 4.2434 |
| 3.3104 | 5.89 | 10000 | 4.2422 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
goethe0101/GWP_Model | goethe0101 | 2023-07-15T04:46:28Z | 1 | 0 | peft | [
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-07-08T01:59:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
digiplay/Opiate_v1 | digiplay | 2023-07-15T04:39:12Z | 272 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T04:15:32Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/69587?modelVersionId=81796
Original Author's DEMO images :


|
ZidanSink/Kayess | ZidanSink | 2023-07-15T04:35:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-29T07:27:11Z | ---
license: creativeml-openrail-m
---
|
Wiryan/imryan | Wiryan | 2023-07-15T04:27:51Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T04:22:48Z | ---
license: creativeml-openrail-m
---
|
manmyung/Reinforce-CartPole-v1 | manmyung | 2023-07-15T04:24:11Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T04:23:52Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 490.20 +/- 23.02
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
RoundtTble/dinov2_vitl14_onnx | RoundtTble | 2023-07-15T04:16:19Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2023-07-02T02:18:01Z | # dinov2_vitl14_onnx
## Run Triton
```
make triton
```
```
=============================
== Triton Inference Server ==
=============================
NVIDIA Release 23.04 (build 58408265)
Triton Server Version 2.33.0
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
NOTE: CUDA Forward Compatibility mode ENABLED.
Using CUDA 12.1 driver version 530.30.02 with kernel driver version 525.125.06.
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
I0715 04:13:59.173070 1 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f1a70000000' with size 268435456
I0715 04:13:59.173293 1 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0715 04:13:59.175108 1 model_lifecycle.cc:459] loading: dinov2_vitl14:1
I0715 04:13:59.177471 1 onnxruntime.cc:2504] TRITONBACKEND_Initialize: onnxruntime
I0715 04:13:59.177510 1 onnxruntime.cc:2514] Triton TRITONBACKEND API version: 1.12
I0715 04:13:59.177518 1 onnxruntime.cc:2520] 'onnxruntime' TRITONBACKEND API version: 1.12
I0715 04:13:59.177525 1 onnxruntime.cc:2550] backend configuration:
{"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}}
I0715 04:13:59.233419 1 onnxruntime.cc:2608] TRITONBACKEND_ModelInitialize: dinov2_vitl14 (version 1)
I0715 04:13:59.233847 1 onnxruntime.cc:666] skipping model configuration auto-complete for 'dinov2_vitl14': inputs and outputs already specified
I0715 04:13:59.234233 1 onnxruntime.cc:2651] TRITONBACKEND_ModelInstanceInitialize: dinov2_vitl14_0 (GPU device 0)
2023-07-15 04:13:59.546824126 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-07-15 04:13:59.546847104 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
I0715 04:14:00.851748 1 model_lifecycle.cc:694] successfully loaded 'dinov2_vitl14' version 1
I0715 04:14:00.851859 1 server.cc:583]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0715 04:14:00.851944 1 server.cc:610]
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Backend | Path | Config |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so | {"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}} |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0715 04:14:00.852005 1 server.cc:653]
+---------------+---------+--------+
| Model | Version | Status |
+---------------+---------+--------+
| dinov2_vitl14 | 1 | READY |
+---------------+---------+--------+
I0715 04:14:00.872645 1 metrics.cc:808] Collecting metrics for GPU 0: NVIDIA RTX A4000
I0715 04:14:00.873026 1 metrics.cc:701] Collecting CPU metrics
I0715 04:14:00.873315 1 tritonserver.cc:2387]
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.33.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logging |
| model_repository_path[0] | /models |
| model_control_mode | MODE_NONE |
| strict_model_config | 0 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
| cache_enabled | 0 |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0715 04:14:00.875498 1 grpc_server.cc:2450] Started GRPCInferenceService at 0.0.0.0:8001
I0715 04:14:00.875964 1 http_server.cc:3555] Started HTTPService at 0.0.0.0:8000
I0715 04:14:00.917871 1 http_server.cc:185] Started Metrics Service at 0.0.0.0:8002
```
## Perf Analyzer
```
docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:23.04-py3-sdk perf_analyzer -m dinov2_vitl14 --percentile=95 -i grpc -u 0.0.0.0:8001 --concurrency-range 16:16 --shape input:3,560,560
=================================
== Triton Inference Server SDK ==
=================================
NVIDIA Release 23.04 (build 58408269)
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
NOTE: CUDA Forward Compatibility mode ENABLED.
Using CUDA 12.1 driver version 530.30.02 with kernel driver version 525.125.06.
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
*** Measurement Settings ***
Batch size: 1
Service Kind: Triton
Using "time_windows" mode for stabilization
Measurement window: 5000 msec
Latency limit: 0 msec
Concurrency limit: 16 concurrent requests
Using synchronous calls for inference
Stabilizing using p95 latency
Request concurrency: 16
Client:
Request count: 881
Throughput: 48.927 infer/sec
p50 latency: 324015 usec
p90 latency: 330275 usec
p95 latency: 331952 usec
p99 latency: 336638 usec
Avg gRPC time: 323066 usec ((un)marshal request/response 953 usec + response wait 322113 usec)
Server:
Inference count: 881
Execution count: 111
Successful request count: 881
Avg request latency: 313673 usec (overhead 7065 usec + queue 151785 usec + compute input 7582 usec + compute infer 143162 usec + compute output 4077 usec)
Inferences/Second vs. Client p95 Batch Latency
Concurrency: 16, throughput: 48.927 infer/sec, latency 331952 usec
```
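A client-side request against the server started above could look like the following sketch. Assumptions not shown in the log: the input tensor is named `input` with shape `3x560x560` (as used by perf_analyzer), the data type is FP32, and the output tensor is named `output`; check the model config in this repo for the real names.
```python
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="0.0.0.0:8001")

# Batch of one 3x560x560 FP32 image; random values stand in for a preprocessed image.
data = np.random.rand(1, 3, 560, 560).astype(np.float32)
infer_input = grpcclient.InferInput("input", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

response = client.infer(
    model_name="dinov2_vitl14",
    inputs=[infer_input],
    outputs=[grpcclient.InferRequestedOutput("output")],  # "output" is an assumed name
)
print(response.as_numpy("output").shape)
```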
|
mittalashish/chique7 | mittalashish | 2023-07-15T04:11:30Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T04:08:44Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: <Chique>
---
### chique7 Dreambooth model trained by mittalashish with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-512 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
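Outside the notebook, loading the model with `diffusers` directly might look like the sketch below (the dtype, step count, and prompt wording are assumptions, not taken from this card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical local-inference sketch for this DreamBooth model.
pipe = StableDiffusionPipeline.from_pretrained(
    "mittalashish/chique7", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of <Chique>", num_inference_steps=30).images[0]
image.save("chique.png")
```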
Sample pictures of:
<Chique> (use that on your prompt)

|
NasimB/gpt2-concat-rarity-guten-bnc-no-cut | NasimB | 2023-07-15T04:07:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T02:14:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-rarity-guten-bnc-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-rarity-guten-bnc-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7063 | 0.29 | 500 | 5.6381 |
| 5.354 | 0.59 | 1000 | 5.2164 |
| 5.0098 | 0.88 | 1500 | 4.9588 |
| 4.7339 | 1.17 | 2000 | 4.8190 |
| 4.5764 | 1.46 | 2500 | 4.6923 |
| 4.4686 | 1.76 | 3000 | 4.5840 |
| 4.3402 | 2.05 | 3500 | 4.5086 |
| 4.152 | 2.34 | 4000 | 4.4605 |
| 4.1177 | 2.63 | 4500 | 4.4050 |
| 4.0811 | 2.93 | 5000 | 4.3506 |
| 3.8727 | 3.22 | 5500 | 4.3480 |
| 3.819 | 3.51 | 6000 | 4.3120 |
| 3.8077 | 3.8 | 6500 | 4.2812 |
| 3.698 | 4.1 | 7000 | 4.2842 |
| 3.5395 | 4.39 | 7500 | 4.2768 |
| 3.5285 | 4.68 | 8000 | 4.2603 |
| 3.5155 | 4.97 | 8500 | 4.2472 |
| 3.3564 | 5.27 | 9000 | 4.2620 |
| 3.3394 | 5.56 | 9500 | 4.2607 |
| 3.3378 | 5.85 | 10000 | 4.2600 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jerryjalapeno/nart-100k-7b | jerryjalapeno | 2023-07-15T03:57:11Z | 1,520 | 20 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-14T19:01:46Z | ---
license: cc-by-nc-nd-4.0
---
|
renatostrianese/q-FrozenLake-v1-4x4-noSlippery | renatostrianese | 2023-07-15T03:43:44Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T03:43:33Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="renatostrianese/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
crumb/opentinystories-68m-complex | crumb | 2023-07-15T03:25:24Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"dataset:crumb/flan-ul2-tinystories-complex",
"dataset:crumb/flan-ul2-tinystories",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T09:16:03Z | ---
datasets:
- crumb/flan-ul2-tinystories-complex
- crumb/flan-ul2-tinystories
---
Test loss: 2.669290 on crumb/flan-ul2-tinystories-complex. Initialized from crumb/opentinystories-30m-base and trained for 2 epochs with a linearly decreasing learning rate of 1e-4 and double the batch size (256). |
NasimB/gpt2-concat-switch-rarity-no-cut | NasimB | 2023-07-15T02:38:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T00:47:27Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-switch-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-switch-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7037 | 0.29 | 500 | 5.6319 |
| 5.3373 | 0.58 | 1000 | 5.2001 |
| 4.9919 | 0.87 | 1500 | 4.9536 |
| 4.7185 | 1.17 | 2000 | 4.8020 |
| 4.5556 | 1.46 | 2500 | 4.6811 |
| 4.4476 | 1.75 | 3000 | 4.5737 |
| 4.3298 | 2.04 | 3500 | 4.4863 |
| 4.1272 | 2.33 | 4000 | 4.4421 |
| 4.0996 | 2.62 | 4500 | 4.3853 |
| 4.0564 | 2.91 | 5000 | 4.3350 |
| 3.8676 | 3.21 | 5500 | 4.3248 |
| 3.8015 | 3.5 | 6000 | 4.2945 |
| 3.7787 | 3.79 | 6500 | 4.2610 |
| 3.6894 | 4.08 | 7000 | 4.2563 |
| 3.5111 | 4.37 | 7500 | 4.2530 |
| 3.5076 | 4.66 | 8000 | 4.2365 |
| 3.4984 | 4.95 | 8500 | 4.2243 |
| 3.341 | 5.24 | 9000 | 4.2363 |
| 3.3189 | 5.54 | 9500 | 4.2358 |
| 3.3196 | 5.83 | 10000 | 4.2346 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
RajanGo/RajanGo-Asgn-2 | RajanGo | 2023-07-15T01:43:52Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-15T01:43:44Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
timjwhite/a2c-AntBulletEnv-v0 | timjwhite | 2023-07-15T01:39:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T01:37:29Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 792.36 +/- 37.50
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
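Until the usage section is filled in, a minimal sketch of what loading could look like is given below (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# "a2c-AntBulletEnv-v0.zip" is an assumed filename.
checkpoint = load_from_hub(
    repo_id="timjwhite/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```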
|
chandrasutrisnotjhong/taxi | chandrasutrisnotjhong | 2023-07-15T01:06:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T01:06:45Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="chandrasutrisnotjhong/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
borkur/gpt2-finetuned-wikitext2 | borkur | 2023-07-15T00:56:29Z | 85 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-14T21:30:03Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: borkur/gpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# borkur/gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.4948
- Validation Loss: 6.3466
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.3152 | 6.7681 | 0 |
| 6.4948 | 6.3466 | 1 |
### Framework versions
- Transformers 4.31.0.dev0
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
giocs2017/dqn-SpaceInvadersNoFrameskip-v4-gio | giocs2017 | 2023-07-15T00:25:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T00:25:23Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 595.00 +/- 126.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga giocs2017 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga giocs2017 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga giocs2017
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ALM-AHME/beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20 | ALM-AHME | 2023-07-14T23:55:06Z | 5 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-14T20:43:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Splitted-Resized
split: train
args: Splitted-Resized
metrics:
- name: Accuracy
type: accuracy
value: 0.9938708156529938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0275
- Accuracy: 0.9939
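A hedged inference sketch for this checkpoint (usage via the standard `transformers` image-classification pipeline is assumed; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ALM-AHME/beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20",
)

# "sample.png" is a placeholder path to a histopathology image.
print(classifier("sample.png", top_k=2))
```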
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.46 | 1.0 | 199 | 0.3950 | 0.8482 |
| 0.2048 | 2.0 | 398 | 0.1886 | 0.9189 |
| 0.182 | 3.0 | 597 | 0.1382 | 0.9481 |
| 0.0826 | 4.0 | 796 | 0.0760 | 0.9694 |
| 0.0886 | 5.0 | 995 | 0.0600 | 0.9788 |
| 0.0896 | 6.0 | 1194 | 0.0523 | 0.9802 |
| 0.0774 | 7.0 | 1393 | 0.0482 | 0.9826 |
| 0.0876 | 8.0 | 1592 | 0.0289 | 0.9877 |
| 0.1105 | 9.0 | 1791 | 0.0580 | 0.9821 |
| 0.0289 | 10.0 | 1990 | 0.0294 | 0.9925 |
| 0.0594 | 11.0 | 2189 | 0.0331 | 0.9906 |
| 0.0011 | 12.0 | 2388 | 0.0275 | 0.9939 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
silvacarl/distilbert-base-uncased-finetuned-cola | silvacarl | 2023-07-14T23:45:58Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T22:37:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.527141964318474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8042
- Matthews Correlation: 0.5271
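For reference, a minimal inference sketch (assumed usage via the standard `transformers` text-classification pipeline; the label names come from the checkpoint config, not this card):
```python
from transformers import pipeline

# CoLA is a grammatical-acceptability task, so the labels distinguish acceptable vs. unacceptable sentences.
classifier = pipeline(
    "text-classification",
    model="silvacarl/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the author."))
```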
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5199 | 1.0 | 535 | 0.5170 | 0.4218 |
| 0.3502 | 2.0 | 1070 | 0.5057 | 0.4959 |
| 0.2419 | 3.0 | 1605 | 0.6179 | 0.5164 |
| 0.1818 | 4.0 | 2140 | 0.7569 | 0.5209 |
| 0.1328 | 5.0 | 2675 | 0.8042 | 0.5271 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CheeriosMomentors/LORA | CheeriosMomentors | 2023-07-14T23:32:58Z | 0 | 0 | null | [
"en",
"license:wtfpl",
"region:us"
] | null | 2023-04-08T06:21:46Z | ---
license: wtfpl
language:
- en
---
Okay, listen up. These are mostly LoRAs that I made myself.
Some of these may be released on Civitai and some may not.
If you found these, good job: you now have cool LoRAs.
You can post these on Civitai or anywhere, I don't care.
You can say these are yours and make money off them, I do not care.
But please, for god's sake, leave my name out of it.
I am not responsible for anything you do with these.
These were just for fun, that is all. Now enjoy.
Lora Count: 2
We currently have Nisho Ishin (Medaka Box) style and ryukishi07 (Umineko Style.)
I may make more and post them here. |
Yntec/Photosphere | Yntec | 2023-07-14T23:22:58Z | 1,547 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Noosphere",
"Dreamlike",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-14T22:54:19Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Noosphere
- Dreamlike
---
# Photosphere
A mix of Noosphere v3 by skumerz and photorealistic models.
Original page:
https://civitai.com/models/36538?modelVersionId=107675 |
MnLgt/slope-bed | MnLgt | 2023-07-14T23:19:56Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2023-07-14T23:19:55Z | ---
license: mit
---
### slope-bed on Stable Diffusion
This is the `<slope-bed>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
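As a rough alternative to the notebooks, the embedding can also be loaded directly with diffusers; this sketch assumes the repo ships the standard `learned_embeds.bin` produced by the training notebook:
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Registers the <slope-bed> token with the pipeline's tokenizer and text encoder
pipe.load_textual_inversion("MnLgt/slope-bed")

image = pipe("a photo of a <slope-bed> in a sunlit bedroom").images[0]
image.save("slope_bed.png")
```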
Here is the new concept you will be able to use as an `object`:













|
cgr28/q-Taxi-v3 | cgr28 | 2023-07-14T23:15:40Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T23:15:38Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="cgr28/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
0sunfire0/Pixelcopter_train_00 | 0sunfire0 | 2023-07-14T23:10:07Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T23:10:05Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter_train_00
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 7.20 +/- 7.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ashnrk/textual_inversion_annual_crop_te | ashnrk | 2023-07-14T23:05:57Z | 31 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-14T22:58:31Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a centered satellite photo of <annual-crop> annual crop land.
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ashnrk/textual_inversion_annual_crop_te
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a centered satellite photo of <annual-crop> annual crop land." using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
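A minimal inference sketch, assuming the full pipeline was pushed to this repo in the diffusers format used by the DreamBooth training script:
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "ashnrk/textual_inversion_annual_crop_te", torch_dtype=torch.float16
).to("cuda")

# Reuse the instance prompt the weights were trained on
prompt = "a centered satellite photo of <annual-crop> annual crop land."
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("annual_crop.png")
```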
|
ddanshin/clip-roberta-finetuned | ddanshin | 2023-07-14T22:45:45Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"dataset:ydshieh/coco_dataset_script",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-07-14T00:04:05Z | ---
base_model: ./clip-roberta
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./clip-roberta](https://huggingface.co/./clip-roberta) on the ydshieh/coco_dataset_script 2017 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5850
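The base checkpoint name and dataset suggest this is the CLIP-vision + RoBERTa dual encoder from the transformers contrastive image-text example; under that assumption (and assuming processor files were saved alongside the model), an image-text similarity sketch looks roughly like this:
```python
import torch
import requests
from PIL import Image
from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor

repo = "ddanshin/clip-roberta-finetuned"
model = VisionTextDualEncoderModel.from_pretrained(repo)
processor = VisionTextDualEncoderProcessor.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["two cats lying on a couch", "a plane on a runway"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1 image, 2 texts)
print(logits.softmax(dim=-1))
```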
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YanJiangJerry/sentiment-roberta-e3-b16-v2-w0.01 | YanJiangJerry | 2023-07-14T22:45:22Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T13:20:25Z | ---
tags:
- generated_from_trainer
metrics:
- f1
- recall
- precision
model-index:
- name: sentiment-roberta-e3-b16-v2-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-roberta-e3-b16-v2-w0.01
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6014
- F1: 0.7844
- Recall: 0.7844
- Precision: 0.7844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:---------:|
| No log | 1.0 | 187 | 0.6687 | 0.7574 | 0.7574 | 0.7574 |
| No log | 2.0 | 374 | 0.5700 | 0.7898 | 0.7898 | 0.7898 |
| 0.6052 | 3.0 | 561 | 0.6014 | 0.7844 | 0.7844 | 0.7844 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
underactuated/opt-350m_ft | underactuated | 2023-07-14T22:41:50Z | 136 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-14T22:39:39Z | ---
tags:
- generated_from_trainer
model-index:
- name: opt-350m_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YanJiangJerry/sentiment-roberta-e2-b16-v2-w0.01 | YanJiangJerry | 2023-07-14T22:29:12Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T22:22:40Z | ---
tags:
- generated_from_trainer
metrics:
- f1
- recall
- precision
model-index:
- name: sentiment-roberta-e2-b16-v2-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-roberta-e2-b16-v2-w0.01
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8630
- F1: 0.7520
- Recall: 0.7520
- Precision: 0.7520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:---------:|
| No log | 1.0 | 375 | 0.8651 | 0.6739 | 0.6739 | 0.6739 |
| 0.6564 | 2.0 | 750 | 0.8630 | 0.7520 | 0.7520 | 0.7520 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
marloz03/my_awesome_qa_model | marloz03 | 2023-07-14T22:26:40Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-13T21:07:04Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: marloz03/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# marloz03/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2264
- Validation Loss: 1.4529
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6044 | 1.5880 | 0 |
| 1.3853 | 1.4529 | 1 |
| 1.2264 | 1.4529 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.10.0
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Recognai/zeroshot_selectra_small | Recognai | 2023-07-14T22:23:19Z | 129 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-03-02T23:29:04Z | ---
language: es
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
pipeline_tag: zero-shot-classification
license: apache-2.0
widget:
- text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo"
candidate_labels: "cultura, sociedad, economia, salud, deportes"
---
# Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_small")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'salud', 'economia', 'deportes'],
'scores': [0.3711881935596466,
0.25650349259376526,
0.17355826497077942,
0.1641489565372467,
0.03460107371211052]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| [zs SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium) | 41M | **0.807** | **0.589** |
| zs SELECTRA small | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp)) |
cuervjos/alpacaIOD-7b-plus | cuervjos | 2023-07-14T22:22:53Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-13T08:58:46Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Recognai/zeroshot_selectra_medium | Recognai | 2023-07-14T22:21:07Z | 795 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-03-02T23:29:04Z | ---
language: es
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- xnli
pipeline_tag: zero-shot-classification
license: apache-2.0
widget:
- text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo"
candidate_labels: "cultura, sociedad, economia, salud, deportes"
---
# Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'economia', 'salud', 'deportes'],
'scores': [0.6450043320655823,
0.16710571944713593,
0.08507631719112396,
0.0759836807847023,
0.026829993352293968]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Demo and tutorial
If you want to see this model in action, we have created a basic tutorial using [Rubrix](https://www.rubrix.ml/), a free and open-source tool to *explore, annotate, and monitor data for NLP*.
The tutorial shows you how to evaluate this classifier for news categorization in Spanish, and how it could be used to build a training set for training a supervised classifier (which might be useful if you want to obtain more precise results or improve the model over time).
You can [find the tutorial here](https://rubrix.readthedocs.io/en/master/tutorials/zeroshot_data_annotation.html).
See the video below showing the predictions within the annotation process (note that the predictions are almost correct for every example).
<video width="100%" controls><source src="https://github.com/recognai/rubrix-materials/raw/main/tutorials/videos/zeroshot_selectra_news_data_annotation.mp4" type="video/mp4"></video>
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| zs SELECTRA medium | 41M | **0.807** | **0.589** |
| [zs SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp)) |
ammag/bert-finetuned-squad | ammag | 2023-07-14T22:17:38Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-07T20:37:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jowie/ppo-LunarLander | Jowie | 2023-07-14T22:08:23Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T22:07:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 227.31 +/- 46.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; use the .zip actually stored in this repo
checkpoint = load_from_hub("Jowie/ppo-LunarLander", "ppo-LunarLander.zip")
model = PPO.load(checkpoint)
```
|
ashnrk/textual_inversion_annual_crop | ashnrk | 2023-07-14T22:07:25Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-10T21:25:08Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a centered satellite photo of <annual-crop> annual crop land.
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ashnrk/textual_inversion_annual_crop
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a centered satellite photo of <annual-crop> annual crop land." using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
brucew5978/my_awesome_asr_mind_model | brucew5978 | 2023-07-14T22:02:12Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-12T18:24:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: my_awesome_asr_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 57.1369
- Wer: 1.1053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 48.7151 | 200.0 | 1000 | 57.1369 | 1.1053 |
| 47.4068 | 400.0 | 2000 | 57.1369 | 1.1053 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AACEE/pokemon-lora | AACEE | 2023-07-14T21:57:11Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-14T20:24:26Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - AACEE/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
surprisal-optimizer/dqn-SpaceInvadersNoFrameskip-v4 | surprisal-optimizer | 2023-07-14T21:55:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T21:54:41Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 364.00 +/- 173.79
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga surprisal-optimizer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga surprisal-optimizer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga surprisal-optimizer
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1200),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
wolffenbuetell/PFKODRCHORMA | wolffenbuetell | 2023-07-14T21:53:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-14T21:48:13Z | ---
license: creativeml-openrail-m
---
|
0sunfire0/Cartpole-v1_train_01 | 0sunfire0 | 2023-07-14T21:31:24Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T21:31:15Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1_train_01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 497.20 +/- 8.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NasimB/gpt2-concat-qed-rarity-no-cut | NasimB | 2023-07-14T21:16:05Z | 140 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-14T19:12:29Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-qed-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-qed-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7002 | 0.29 | 500 | 5.6309 |
| 5.3451 | 0.58 | 1000 | 5.2082 |
| 5.0021 | 0.88 | 1500 | 4.9592 |
| 4.7266 | 1.17 | 2000 | 4.8110 |
| 4.5737 | 1.46 | 2500 | 4.6859 |
| 4.4727 | 1.75 | 3000 | 4.5796 |
| 4.3511 | 2.04 | 3500 | 4.5066 |
| 4.1544 | 2.34 | 4000 | 4.4568 |
| 4.1252 | 2.63 | 4500 | 4.3988 |
| 4.083 | 2.92 | 5000 | 4.3471 |
| 3.8825 | 3.21 | 5500 | 4.3454 |
| 3.8226 | 3.5 | 6000 | 4.3139 |
| 3.8118 | 3.8 | 6500 | 4.2766 |
| 3.7159 | 4.09 | 7000 | 4.2763 |
| 3.5383 | 4.38 | 7500 | 4.2702 |
| 3.5395 | 4.67 | 8000 | 4.2556 |
| 3.5257 | 4.96 | 8500 | 4.2454 |
| 3.3727 | 5.26 | 9000 | 4.2570 |
| 3.3469 | 5.55 | 9500 | 4.2567 |
| 3.3465 | 5.84 | 10000 | 4.2550 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dylanalloy/bert-finetuned-ner | dylanalloy | 2023-07-14T21:09:48Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-14T19:41:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9337180544105523
- name: Recall
type: recall
value: 0.9530461124200605
- name: F1
type: f1
value: 0.9432830848671608
- name: Accuracy
type: accuracy
value: 0.9872843939483135
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
- Precision: 0.9337
- Recall: 0.9530
- F1: 0.9433
- Accuracy: 0.9873
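For quick inference, a token-classification pipeline sketch (the example sentence and the output shown in the comment are illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="dylanalloy/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
# expected shape of output: [{'entity_group': 'ORG', 'word': 'Hugging Face', ...}, ...]
```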
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0868 | 1.0 | 1756 | 0.0651 | 0.9158 | 0.9371 | 0.9263 | 0.9828 |
| 0.0351 | 2.0 | 3512 | 0.0635 | 0.9286 | 0.9493 | 0.9388 | 0.9864 |
| 0.0182 | 3.0 | 5268 | 0.0575 | 0.9337 | 0.9530 | 0.9433 | 0.9873 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Vladislav-HuggingFace/dqn-SpaceInvadersNoFrameskip-v4 | Vladislav-HuggingFace | 2023-07-14T20:52:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T20:52:04Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 654.50 +/- 195.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Vladislav-HuggingFace -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Vladislav-HuggingFace -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Vladislav-HuggingFace
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
YanJiangJerry/covid-augment-tweet-bert-large-e8-noweight | YanJiangJerry | 2023-07-14T20:48:41Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T20:18:03Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: covid-augment-tweet-bert-large-e8-noweight
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-augment-tweet-bert-large-e8-noweight
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2396
- Accuracy: 0.9714
- F1: 0.9249
- Precision: 0.9095
- Recall: 0.9409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 408 | 0.1663 | 0.9419 | 0.8609 | 0.78 | 0.9606 |
| 0.2202 | 2.0 | 816 | 0.1532 | 0.9594 | 0.8957 | 0.8630 | 0.9310 |
| 0.0794 | 3.0 | 1224 | 0.1745 | 0.9687 | 0.9167 | 0.9122 | 0.9212 |
| 0.0318 | 4.0 | 1632 | 0.1815 | 0.9696 | 0.9197 | 0.9087 | 0.9310 |
| 0.0098 | 5.0 | 2040 | 0.2013 | 0.9705 | 0.9227 | 0.9052 | 0.9409 |
| 0.0098 | 6.0 | 2448 | 0.2173 | 0.9733 | 0.9294 | 0.9183 | 0.9409 |
| 0.0031 | 7.0 | 2856 | 0.2324 | 0.9696 | 0.9189 | 0.9167 | 0.9212 |
| 0.0024 | 8.0 | 3264 | 0.2396 | 0.9714 | 0.9249 | 0.9095 | 0.9409 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RiversHaveWings/minihf_evaluator_openllama_7b | RiversHaveWings | 2023-07-14T20:35:57Z | 7 | 0 | peft | [
"peft",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2023-07-14T19:42:13Z | ---
library_name: peft
license: apache-2.0
---
# minihf_evaluator_openllama_7b
`minihf_evaluator_openllama_7b` is a LoRA instruct fine-tune of [OpenLLaMA 7B](https://huggingface.co/openlm-research/open_llama_7b).
The sequence `<|end|>` was used to separate the prompt and response. The correct way to prompt the model is: `Does 2 + 2 = 4?<|end|>`. The tokenizer will prepend a BOS token (`<s>`) by default. The response will end with an EOS token (`</s>`).
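A rough loading-and-generation sketch with PEFT on top of the base model (loading in fp16 here is a convenience assumption; 4-bit loading with bitsandbytes, as used during training, works as well):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "openlm-research/open_llama_7b"
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "RiversHaveWings/minihf_evaluator_openllama_7b")

# The tokenizer prepends <s> automatically; end the prompt with <|end|>
inputs = tokenizer("Does 2 + 2 = 4?<|end|>", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```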
## Training procedure
`minihf_evaluator_openllama_7b` was fine-tuned for 100,000 examples on 90% [Muennighoff/flan](https://huggingface.co/datasets/Muennighoff/flan) / 10% [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) using batch size 4 per GPU on 8 40GB A100 GPUs. Examples where the prompt and response would not fit into 2,048 tokens were dropped. The fine-tuning was done using the following command:
```bash
accelerate launch make_evaluator.py --output-dir minihf_evaluator_openllama_7b
```
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
hseokool/vicuna-13b-v1.3-230623-10 | hseokool | 2023-07-14T20:35:33Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-14T20:35:31Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
n3bbb/distilbert-base-uncased-finetuned-cola | n3bbb | 2023-07-14T20:34:39Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T19:20:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5514555448601601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8001
- Matthews Correlation: 0.5515
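A quick text-classification pipeline sketch; with the default GLUE fine-tuning setup, `LABEL_1` usually corresponds to "acceptable", but verify this against the repo's config:
```python
from transformers import pipeline

cola = pipeline("text-classification", model="n3bbb/distilbert-base-uncased-finetuned-cola")

# Grammatical vs. ungrammatical example sentences
print(cola(["The book was written by the author.", "The book was wrote by the author."]))
```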
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5242 | 1.0 | 535 | 0.5338 | 0.4221 |
| 0.3484 | 2.0 | 1070 | 0.4976 | 0.4779 |
| 0.2417 | 3.0 | 1605 | 0.5211 | 0.5452 |
| 0.1765 | 4.0 | 2140 | 0.7580 | 0.5282 |
| 0.1269 | 5.0 | 2675 | 0.8001 | 0.5515 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fontanap/q-FrozenLake-v1-4x4-noSlippery | fontanap | 2023-07-14T20:27:22Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T20:27:20Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="fontanap/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NICFRU/bart-base-paraphrasing-science | NICFRU | 2023-07-14T20:22:09Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-14T19:29:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-paraphrasing
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-paraphrasing
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.309600
- Rouge1: 37.346600
- Rouge2: 31.232000
- Rougel: 35.649300
- Rougelsum: 36.620700
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 27
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Step | Epoch | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.272100| 15| 1| 0.728453| 35.610300| 28.460200| 33.443200| 34.660100| 19.957500 |
| 0.828400| 30| 1| 0.672391| 35.944600| 29.183200| 33.994800| 35.159600| 19.961500 |
| 0.750400| 45| 1| 0.621431| 36.373600| 29.659600| 34.441700| 35.605300| 19.977000 |
| 0.728900| 60| 1| 0.597063| 36.034900| 29.380400| 34.177700| 35.257100| 19.970500 |
| 0.699800| 75| 1| 0.585529| 35.308700| 28.488400| 33.353300| 34.456300| 19.971500 |
| 0.698900| 90| 1| 0.560137| 35.956300| 29.453500| 34.155300| 35.154300| 19.970500 |
| 0.669100| 105| 1| 0.555273| 36.017400| 29.399500| 34.099400| 35.162900| 19.972500 |
| 0.637600| 120| 1| 0.551375| 36.357600| 29.783200| 34.549300| 35.561400| 19.976000 |
| 0.653500| 135| 1| 0.530873| 36.578900| 30.080800| 34.764800| 35.789300| 19.970000 |
| 0.597800| 150| 1| 0.528142| 36.219800| 29.791800| 34.459700| 35.407700| 19.974000 |
| 0.626600| 165| 1| 0.510571| 36.251200| 29.698900| 34.422100| 35.432400| 19.972500 |
| 0.585100| 180| 1| 0.504067| 36.191500| 29.743700| 34.408400| 35.401300| 19.969000 |
| 0.576700| 195| 1| 0.495318| 36.648900| 30.248700| 34.869300| 35.885300| 19.974000 |
| 0.549200| 210| 1| 0.494409| 36.392600| 30.035800| 34.577200| 35.623000| 19.972000 |
| 0.570000| 225| 1| 0.479456| 36.339100| 29.928200| 34.589300| 35.569900| 19.965500 |
| 0.550600| 240| 1| 0.473431| 36.646300| 30.312000| 34.851800| 35.861300| 19.964500 |
| 0.566200| 255| 1| 0.471991| 36.514700| 30.070500| 34.630700| 35.685000| 19.968500 |
| 0.539100| 270| 1| 0.459127| 36.328600| 29.984900| 34.568200| 35.487200| 19.968500 |
| 0.527300| 285| 1| 0.449097| 36.541300| 30.132600| 34.705300| 35.714000| 19.968500 |
| 0.521300| 300| 2| 0.448960| 35.926800| 29.508400| 34.115800| 35.147400| 19.973000 |
| 0.471900| 315| 2| 0.443209| 36.748400| 30.365400| 34.966500| 35.956900| 19.968500 |
| 0.499300| 330| 2| 0.439178| 36.783700| 30.461400| 35.037900| 36.023900| 19.968500 |
| 0.473100| 345| 2| 0.422886| 36.773600| 30.514500| 35.021000| 35.998200| 19.973500 |
| 0.459500| 360| 2| 0.422479| 37.235700| 30.945100| 35.394200| 36.474400| 19.970000 |
| 0.454900| 375| 2| 0.421957| 36.685800| 30.390300| 34.903800| 35.925700| 19.968500 |
| 0.456400| 390| 2| 0.427490| 36.233400| 29.811500| 34.441800| 35.424200| 19.971000 |
| 0.446300| 405| 2| 0.420770| 36.860900| 30.457600| 35.035000| 36.068700| 19.968500 |
| 0.462600| 420| 2| 0.421138| 36.468000| 29.979500| 34.586800| 35.633500| 19.971000 |
| 0.432000| 435| 2| 0.411133| 37.028300| 30.761300| 35.271100| 36.271500| 19.971500 |
| 0.470200| 450| 2| 0.411541| 36.740200| 30.499000| 34.988000| 35.977300| 19.968000 |
| 0.447200| 465| 2| 0.402041| 37.204600| 30.997600| 35.446300| 36.492300| 19.960500 |
| 0.461100| 480| 2| 0.409818| 36.912900| 30.706900| 35.156600| 36.150000| 19.966500 |
| 0.448500| 495| 2| 0.412397| 36.813800| 30.550000| 35.086000| 36.037500| 19.965000 |
| 0.440700| 510| 2| 0.409341| 36.976300| 30.703900| 35.230000| 36.203300| 19.968000 |
| 0.463100| 525| 2| 0.409853| 37.053500| 30.862000| 35.364300| 36.332600| 19.971000 |
| 0.460100| 540| 2| 0.405348| 36.580600| 30.349600| 34.859000| 35.823700| 19.966000 |
| 0.449700| 555| 2| 0.404055| 36.880000| 30.500300| 34.966900| 36.023600| 19.973500 |
| 0.445900| 570| 2| 0.401167| 37.105100| 30.894400| 35.349100| 36.337700| 19.969500 |
| 0.473600| 585| 2| 0.401274| 36.506000| 30.272000| 34.790700| 35.759000| 19.971000 |
| 0.435400| 600| 3| 0.404944| 37.093100| 30.850100| 35.391800| 36.369500| 19.971500 |
| 0.414500| 615| 3| 0.400146| 36.936300| 30.789200| 35.195400| 36.203700| 19.966500 |
| 0.395000| 630| 3| 0.400189| 37.110100| 30.915400| 35.420800| 36.338100| 19.966500 |
| 0.405000| 645| 3| 0.401724| 36.860300| 30.623400| 35.093600| 36.080900| 19.969500 |
| 0.403400| 660| 3| 0.405606| 36.777100| 30.546200| 35.065500| 36.000200| 19.969500 |
| 0.398700| 675| 3| 0.403438| 36.531700| 30.283400| 34.829400| 35.730400| 19.969500 |
| 0.398900| 690| 3| 0.396970| 36.871100| 30.672100| 35.157400| 36.047400| 19.970000 |
| 0.378900| 705| 3| 0.413375| 37.082500| 30.848200| 35.339000| 36.312200| 19.966000 |
| 0.391600| 720| 3| 0.395604| 37.091600| 30.925600| 35.404200| 36.360200| 19.969500 |
| 0.374400| 735| 3| 0.398041| 37.287600| 31.112700| 35.548900| 36.543700| 19.969000 |
| 0.390600| 750| 3| 0.399400| 37.050800| 30.844900| 35.278000| 36.281900| 19.969500 |
| 0.398800| 765| 3| 0.391213| 37.260900| 31.090300| 35.493200| 36.499800| 19.961500 |
| 0.391300| 780| 3| 0.392255| 37.062100| 30.859300| 35.327400| 36.311500| 19.968000 |
| 0.414400| 795| 3| 0.390236| 37.043600| 30.738100| 35.249800| 36.285500| 19.968000 |
| 0.369700| 810| 3| 0.390666| 36.889500| 30.710500| 35.129200| 36.129500| 19.968000 |
| 0.372800| 825| 3| 0.389744| 37.012200| 30.853800| 35.225400| 36.279300| 19.966000 |
| 0.380400| 840| 3| 0.389610| 36.834300| 30.671600| 35.048900| 36.063700| 19.966000 |
| 0.369000| 855| 3| 0.385031| 37.137800| 31.043000| 35.421100| 36.393500| 19.964500 |
| 0.386700| 870| 3| 0.394869| 36.993300| 30.773100| 35.204100| 36.215400| 19.966000 |
| 0.389100| 885| 3| 0.387872| 36.994300| 30.764100| 35.276000| 36.250300| 19.969500 |
| 0.381400| 900| 4| 0.384406| 37.118600| 30.899300| 35.351600| 36.380200| 19.969500 |
| 0.372500| 915| 4| 0.386666| 37.036800| 31.053500| 35.317800| 36.293100| 19.966000 |
| 0.351100| 930| 4| 0.390876| 36.950600| 30.806400| 35.247800| 36.190500| 19.963000 |
| 0.349200| 945| 4| 0.391693| 37.173400| 31.020000| 35.406700| 36.414900| 19.966000 |
| 0.350500| 960| 4| 0.383120| 37.257700| 31.094200| 35.502400| 36.498700| 19.966000 |
| 0.390000| 975| 4| 0.384534| 37.103900| 30.999200| 35.392100| 36.383800| 19.966000 |
| 0.343500| 990| 4| 0.384099| 37.074300| 30.941700| 35.361400| 36.334900| 19.969500 |
| 0.347800| 1005| 4| 0.387656| 37.011900| 30.834300| 35.252600| 36.246700| 19.968 |
| 0.359200| 1020| 4| 0.385008| 37.240300| 31.078300| 35.499300| 36.470500| 19.968 |
| 0.344100| 1035| 4| 0.384319| 37.118000| 31.010800| 35.419600| 36.401000| 19.966 |
| 0.344200| 1050| 4| 0.390927| 36.891900| 30.697800| 35.141600| 36.116600| 19.969 |
| 0.353900| 1065| 4| 0.384563| 36.790300| 30.613100| 35.060500| 36.012600| 19.969 |
| 0.354300| 1080| 4| 0.380220| 37.132800| 31.021100| 35.420000| 36.377800| 19.964 |
| 0.348800| 1095| 4| 0.381104| 37.158700| 31.000300| 35.437500| 36.430800| 19.961 |
| 0.349900| 1110| 4| 0.385718| 37.154600| 30.992800| 35.406500| 36.413500| 19.966 |
| 0.349200| 1125| 4| 0.382857| 37.023900| 30.929500| 35.318300| 36.293200| 19.970 |
| 0.351800| 1140| 4| 0.380331| 37.171800| 31.037000| 35.480200| 36.478400| 19.965 |
| 0.348700| 1155| 4| 0.384382| 37.249000| 31.114500| 35.577100| 36.544200| 19.970 |
| 0.325800| 1170| 4| 0.382947| 37.177400| 31.042000| 35.460600| 36.450300| 19.968 |
| 0.351700| 1185| 4| 0.379098| 37.160700| 30.966800| 35.463100| 36.449000| 19.969 |
| 0.329400| 1200| 5| 0.379832| 37.211700| 31.117400| 35.520400| 36.500100| 19.965 |
| 0.309000| 1215| 5| 0.383461| 37.303500| 31.183800| 35.599000| 36.614000| 19.970 |
| 0.321000| 1230| 5| 0.380275| 37.177500| 31.081100| 35.462400| 36.473800| 19.963 |
| 0.309200| 1245| 5| 0.381899| 37.235800| 31.197100| 35.568800| 36.528000| 19.966 |
| 0.326700| 1260| 5| 0.381356| 37.410200| 31.257300| 35.671300| 36.697000| 19.969 |
| 0.324700| 1275| 5| 0.378781| 37.407900| 31.322100| 35.681000| 36.683100| 19.965 |
| 0.303200| 1290| 5| 0.381087| 37.355700| 31.308400| 35.665500| 36.628000| 19.965 |
| 0.335000| 1305| 5| 0.380627| 37.274800| 31.243800| 35.603400| 36.559800| 19.966 |
| 0.349300| 1320| 5| 0.376487| 37.299100| 31.221000| 35.611200| 36.573400| 19.963 |
| 0.302400| 1335| 5| 0.380785| 37.333500| 31.293000| 35.679900| 36.650200| 19.966 |
| 0.309400| 1350| 5| 0.381105| 37.280400| 31.195800| 35.611700| 36.565100| 19.969 |
| 0.322900| 1365| 5| 0.379658| 37.368200| 31.276900| 35.680000| 36.654900| 19.969 |
| 0.334700| 1380| 5| 0.381676| 37.362700| 31.288900| 35.680600| 36.643600| 19.968 |
| 0.323700| 1395| 5| 0.379920| 37.312300| 31.204800| 35.614800| 36.583400| 19.968 |
| 0.334700| 1410| 5| 0.379366| 37.310300| 31.205600| 35.636400| 36.595200| 19.969 |
| 0.327300| 1425| 5| 0.378289| 37.275400| 31.172700| 35.575500| 36.549500| 19.969 |
| 0.326400| 1440| 5| 0.378255| 37.270000| 31.164000| 35.582100| 36.543800| 19.969 |
| 0.326600| 1455| 5| 0.377739| 37.300000| 31.205400| 35.621500| 36.586100| 19.969 |
| 0.335700| 1470| 5| 0.377524| 37.287400| 31.189800| 35.608700| 36.578000| 19.970 |
| 0.309600| 1485| 5| 0.377617| 37.346600| 31.232000| 35.649300| 36.620700| 19.969 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3 |
davej23/distilhubert-finetuned-gtzan | davej23 | 2023-07-14T20:20:33Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-14T18:19:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4577
- Accuracy: 0.86
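A minimal genre-prediction sketch with the audio-classification pipeline (the file name is a placeholder for any local audio clip):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="davej23/distilhubert-finetuned-gtzan")

# Path is a placeholder; GTZAN was trained on ~30 s music excerpts
predictions = classifier("some_song.wav")
print(predictions[0])  # top genre label and score
```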
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8254 | 1.0 | 113 | 1.8353 | 0.48 |
| 1.2492 | 2.0 | 226 | 1.4297 | 0.57 |
| 1.0203 | 3.0 | 339 | 0.9814 | 0.69 |
| 0.633 | 4.0 | 452 | 0.7345 | 0.83 |
| 0.5642 | 5.0 | 565 | 0.6213 | 0.8 |
| 0.3219 | 6.0 | 678 | 0.5763 | 0.84 |
| 0.1772 | 7.0 | 791 | 0.4850 | 0.86 |
| 0.2427 | 8.0 | 904 | 0.4841 | 0.86 |
| 0.1397 | 9.0 | 1017 | 0.4760 | 0.86 |
| 0.4494 | 10.0 | 1130 | 0.4577 | 0.86 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ontel/niovilorrra | ontel | 2023-07-14T19:54:38Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-14T19:53:27Z | ---
license: creativeml-openrail-m
---
|
Rui31415/q-FrozenLake-v1-4x4-noSlippery | Rui31415 | 2023-07-14T19:50:52Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T19:50:49Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Rui31415/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
chaojiang06/arxiv-sentence-alignment | chaojiang06 | 2023-07-14T19:42:21Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2210.15067",
"endpoints_compatible",
"region:us"
] | null | 2023-02-19T21:55:34Z |
# Checkpoints for [arXivEdits paper](https://arxiv.org/pdf/2210.15067.pdf). Please see more details at the [github repo](https://github.com/chaojiang06/arXivEdits/tree/main).
|
chaojiang06/arXivEdits-intention-classifier-T5-large-fine-grained | chaojiang06 | 2023-07-14T19:41:54Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"arxiv:2210.15067",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-14T18:59:20Z | ---
tags:
- generated_from_trainer
model-index:
- name: arXivEdits-intention-classifier-T5-large-fine-grained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Checkpoints for [arXivEdits paper](https://arxiv.org/pdf/2210.15067.pdf). Please see more details at the [github repo](https://github.com/chaojiang06/arXivEdits/tree/main).
# arXivEdits-intention-classifier-T5-large-fine-grained
This model is a fine-tuned version of [tmp/tst-translation355](https://huggingface.co/tmp/tst-translation355) on an unknown dataset.
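A rough usage sketch with `transformers`; the input string below is only illustrative, since the exact input formatting used for fine-tuning is described in the GitHub repo:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chaojiang06/arXivEdits-intention-classifier-T5-large-fine-grained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative sentence pair only; see the arXivEdits repo for the expected format
text = "old sentence: We propose a new method. new sentence: We propose a novel method."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # predicted intention label
```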
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.11.6
|
YanJiangJerry/SA-berttweet-large-e6-w2-1-b16-w0.01 | YanJiangJerry | 2023-07-14T19:35:33Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T18:56:29Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-berttweet-large-e6-w2-1-b16-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-berttweet-large-e6-w2-1-b16-w0.01
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4510
- Accuracy: 0.935
- F1: 0.9423
- Precision: 0.9432
- Recall: 0.9415
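A minimal inference sketch; the label names come from the model config and may be the generic `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YanJiangJerry/SA-berttweet-large-e6-w2-1-b16-w0.01",
)

# Example tweet-style input (illustrative)
print(classifier("I love this new update!"))
```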
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.2599 | 0.871 | 0.8714 | 0.9954 | 0.7748 |
| 0.3039 | 2.0 | 570 | 0.2502 | 0.929 | 0.9371 | 0.9363 | 0.9379 |
| 0.3039 | 3.0 | 855 | 0.4228 | 0.923 | 0.9331 | 0.9148 | 0.9521 |
| 0.1246 | 4.0 | 1140 | 0.4102 | 0.934 | 0.9414 | 0.9431 | 0.9397 |
| 0.1246 | 5.0 | 1425 | 0.4532 | 0.933 | 0.9407 | 0.9398 | 0.9415 |
| 0.0379 | 6.0 | 1710 | 0.4510 | 0.935 | 0.9423 | 0.9432 | 0.9415 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
janimo/taxiv3 | janimo | 2023-07-14T19:24:28Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T19:24:25Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="janimo/taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
w601sxs/pythia-70m-instruct-orca-chkpt-64000 | w601sxs | 2023-07-14T19:16:16Z | 171 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:Open-Orca/OpenOrca",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-14T18:39:56Z | ---
datasets:
- Open-Orca/OpenOrca
---
To use, do:
```
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

ref_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped-v0", torch_dtype=torch.bfloat16)

peft_model_id = "w601sxs/pythia-70m-instruct-orca-chkpt-64000"

config = PeftConfig.from_pretrained(peft_model_id)
model = PeftModel.from_pretrained(ref_model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

model = model.to('cuda:0')
model.eval()

# Build an input following the prompt format described below (illustrative example)
prompt = "context: <You are an AI assistant.>\nquestion: <What is the capital of France?>\nanswer: <"

inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=10)

print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
```
### Prompt format
```
context: < ... >
question: < ... >
answer: < ... >
```
For e.g.
```
context: <You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.>
question: <Here is some data: The Rice Boat eatType restaurant; The Rice Boat food Fast food; The Rice Boat familyFriendly yes; The Rice Boat near Express by Holiday Inn.
Write a sentence that describes this data:>
answer: <
``` |
TencentARC/t2iadapter_keypose_sd14v1 | TencentARC | 2023-07-14T19:01:13Z | 11 | 2 | diffusers | [
"diffusers",
"region:us"
] | null | 2023-07-14T19:01:13Z | ---
duplicated_from: diffusers/t2iadapter_keypose_sd14v1
---
|
YanJiangJerry/SA-roberta-e3-w2-1-b16-w0.01-data2 | YanJiangJerry | 2023-07-14T18:53:37Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T18:22:38Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-roberta-e3-w2-1-b16-w0.01-data2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-roberta-e3-w2-1-b16-w0.01-data2
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5272
- Accuracy: 0.9032
- F1: 0.8664
- Precision: 0.8924
- Recall: 0.8418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2717 | 1.0 | 581 | 0.3400 | 0.9132 | 0.8811 | 0.9003 | 0.8627 |
| 0.1102 | 2.0 | 1162 | 0.5082 | 0.9021 | 0.8706 | 0.8580 | 0.8836 |
| 0.0525 | 3.0 | 1743 | 0.5272 | 0.9032 | 0.8664 | 0.8924 | 0.8418 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Huamin/santacoder-finetuned-the-stack-bash | Huamin | 2023-07-14T18:49:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"custom_code",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T19:30:54Z | ---
license: bigcode-openrail-m
tags:
- generated_from_trainer
model-index:
- name: santacoder-finetuned-the-stack-bash
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-bash
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3174
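A minimal generation sketch, assuming the checkpoint keeps SantaCoder's custom modeling code (hence `trust_remote_code=True`); the prompt is only an illustration:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Huamin/santacoder-finetuned-the-stack-bash"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "#!/bin/bash\n# list all files larger than 1MB\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```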
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6951 | 0.1 | 500 | 1.8041 |
| 1.69 | 0.2 | 1000 | 1.5214 |
| 1.3821 | 0.3 | 1500 | 1.5855 |
| 1.5861 | 0.4 | 2000 | 1.4657 |
| 1.6196 | 0.5 | 2500 | 1.4089 |
| 1.6839 | 0.6 | 3000 | 1.3801 |
| 1.3929 | 0.7 | 3500 | 1.3493 |
| 1.471 | 0.8 | 4000 | 1.3278 |
| 1.3222 | 0.9 | 4500 | 1.3203 |
| 1.4529 | 1.0 | 5000 | 1.3174 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
franklab/HSSM | franklab | 2023-07-14T18:48:22Z | 0 | 2 | null | [
"onnx",
"license:bsd-2-clause",
"region:us"
] | null | 2023-06-22T16:08:25Z | ---
license: bsd-2-clause
---
# Utilizing Custom ONNX Models Stored in Hugging Face within HSSM
This guide will walk you through the process of using custom ONNX models stored in Hugging Face within HSSM (Hierarchical State Space Model) framework.
## Prerequisites
1. Python 3.8 or later.
2. HSSM library installed in your Python environment.
3. A pre-trained ONNX model stored on Hugging Face model hub.
## Step-by-step guide
### Step 1: Import necessary libraries
```
import pandas as pd
import hssm
import pytensor
import ssms.basic_simulators

pytensor.config.floatX = "float32"
```
### Step 2: Define HSSM Configuration
You will have to define the configuration of your model. Make sure you are defining the log-likelihood kind as "approx_differentiable" and providing the Hugging Face model name in the loglik field.
```
my_hssm = hssm.HSSM(
data=dataset_lan,
loglik_kind = "approx_differentiable",
loglik = "levy.onnx",
model="custom",
model_config= {
"backend": "jax",
"list_params": ["v", "a", "z", "alpha", "t"],
"bounds": {
"v": (-3.0, 3.0),
"a": (0.3, 3.0),
"z": (0.1, 0.9),
"alpha": (1.0, 2.0),
"t": (1e-3, 2.0),
},
}
)
```
This creates an HSSM object my_hssm using the custom ONNX model levy.onnx from the Hugging Face repository.
```
my_hssm.sample(cores=2, draws=500, tune=500, mp_ctx="forkserver")
```
# Uploading ONNX Files to a Hugging Face Repository
If your ONNX file is not currently housed in your Hugging Face repository, you can include it by adhering to the steps delineated below:
1. Import the HfApi module from huggingface_hub:
```
from huggingface_hub import HfApi
```
2. Upload the ONNX file using the upload_file method:
```
api = HfApi()
api.upload_file(
path_or_fileobj="test.onnx",
path_in_repo="test.onnx",
repo_id="franklab/HSSM",
repo_type="model",
create_pr=True,
)
```
The execution of these steps will generate a Pull Request (PR) on Hugging Face, which will subsequently be evaluated by a member of our team.
## Creating a Pull Request and a New ONNX Model
1. **Creating a Pull Request on Hugging Face**
Navigate to the following link: [Hugging Face PR](https://huggingface.co/franklab/HSSM/blob/refs%2Fpr%2F1/test.onnx)
By doing so, you will **generate a Pull Request on Hugging Face**, which will be reviewed by our team members.
2. **Creating a Custom ONNX Model**
### Establish Network Config and State Dictionary Files in PyTorch
To construct a custom model and save it as an ONNX file, you must create a network configuration file and a state dictionary file in PyTorch. Refer to the instructions outlined in the README of the [LANFactory package](LINK_TO_LANFACTORY_PACKAGE).
### Convert Network Config and State Dictionary Files to ONNX
Once you've generated the network configuration and state dictionary files, you will need to **convert these files into an ONNX format**.
|
Danish-summarisation/DanSumT5-large | Danish-summarisation | 2023-07-14T18:43:21Z | 29 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"da",
"arxiv:1804.11283",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-04-10T12:21:06Z | ---
pipeline_tag: summarization
license: apache-2.0
language:
- da
---
# mT5-large fine-tuned for News article Summarisation ✏️🧾
[Google's mT5](https://aclanthology.org/2021.naacl-main.41/) for **summarisation** downstream task.
# Model summary
This repository contains a model for Danish abstractive summarisation of news articles. The summariser is based on a language-specific mT5-large.
The model is fine-tuned using an abstractive subset of the DaNewsroom dataset (Varab & Schluter, 2020), according to the binned density categories employed in Newsroom (Grusky et al., 2018).
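A minimal summarisation sketch with `transformers`; the article string is a placeholder and the generation settings are only illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Danish-summarisation/DanSumT5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Indsæt en dansk nyhedsartikel her ..."  # placeholder Danish news article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```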
# References
Grusky, M., Naaman, M., & Artzi, Y. (2018). Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. ArXiv:1804.11283 [Cs]. http://arxiv.org/abs/1804.11283
Varab, D., & Schluter, N. (2020). DaNewsroom: A Large-scale Danish Summarisation Dataset. Proceedings of the 12th Language Resources and Evaluation Conference, 6731–6739. https://aclanthology.org/2020.lrec-1.831
|
absolutt/ppo-LunarLander-v2-1stTry | absolutt | 2023-07-14T18:24:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T18:23:51Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.71 +/- 21.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
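One way to fill in the TODO above is sketched below; the checkpoint filename is an assumption, so check the repository's file list before running:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# NOTE: the filename is assumed; adjust it to the .zip actually stored in this repo
checkpoint = load_from_hub(
    repo_id="absolutt/ppo-LunarLander-v2-1stTry",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```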
|
anushakamath/product_recommendation | anushakamath | 2023-07-14T17:39:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-14T17:17:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: product_recommendation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# product_recommendation
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4953
- Rouge1: 73.0159
- Rouge2: 66.6667
- Rougel: 72.2222
- Rougelsum: 72.2222
- Gen Len: 4.1905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.96 | 6 | 0.4314 | 60.3175 | 47.6190 | 59.8413 | 60.3175 | 4.1429 |
| No log | 1.96 | 12 | 0.4339 | 52.6984 | 38.0952 | 53.1746 | 52.3810 | 4.0952 |
| No log | 2.96 | 18 | 0.5350 | 65.0794 | 52.3810 | 64.2857 | 64.9206 | 4.4286 |
| No log | 3.96 | 24 | 0.3075 | 72.8571 | 61.9048 | 72.1429 | 72.1429 | 4.1905 |
| No log | 4.96 | 30 | 0.4016 | 74.6032 | 66.6667 | 74.6032 | 75.3968 | 4.3333 |
| No log | 5.96 | 36 | 0.4496 | 76.1905 | 71.4286 | 74.6032 | 74.6032 | 4.1905 |
| No log | 6.96 | 42 | 0.5539 | 60.3175 | 57.1429 | 61.9048 | 60.3175 | 4.0 |
| No log | 7.96 | 48 | 0.3816 | 80.9524 | 76.1905 | 79.3651 | 79.3651 | 4.1905 |
| No log | 8.96 | 54 | 0.4602 | 74.6032 | 71.4286 | 74.6032 | 74.6032 | 4.1429 |
| No log | 9.96 | 60 | 0.4953 | 73.0159 | 66.6667 | 72.2222 | 72.2222 | 4.1905 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.8.0
- Tokenizers 0.13.3
|
CMunch/fine_tuned_temp_real | CMunch | 2023-07-14T17:34:39Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-13T17:35:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: fine_tuned_temp_real
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_temp_real
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- Accuracy: 0.9312
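A minimal inference sketch that reads the prediction straight from the logits; the label names come from the model config and may be generic:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "CMunch/fine_tuned_temp_real"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "This movie was surprisingly good."  # illustrative IMDb-style review
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))
```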
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2349 | 1.0 | 1563 | 0.1965 | 0.9247 |
| 0.1521 | 2.0 | 3126 | 0.2334 | 0.9312 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chunwoolee0/my_paircls_klue_nli_beomi_kcbert_base_model | chunwoolee0 | 2023-07-14T17:24:46Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T16:23:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: my_paircls_klue_nli_beomi_kcbert_base_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_paircls_klue_nli_beomi_kcbert_base_model
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9825
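A minimal sentence-pair inference sketch; the Korean premise/hypothesis pair is only illustrative, and the NLI label names come from the model config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "chunwoolee0/my_paircls_klue_nli_beomi_kcbert_base_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "백화점은 오늘 문을 닫는다."          # "The department store is closed today."
hypothesis = "오늘 백화점에서 쇼핑할 수 있다."   # "You can shop at the department store today."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax())])
```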
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 391 | 1.4004 |
| 0.1479 | 2.0 | 782 | 1.2491 |
| 0.167 | 3.0 | 1173 | 1.3786 |
| 0.0803 | 4.0 | 1564 | 1.7437 |
| 0.0803 | 5.0 | 1955 | 1.9825 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MaitreHibou/poca-SoccerTwos | MaitreHibou | 2023-07-14T17:17:10Z | 37 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-07-14T17:16:57Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MaitreHibou/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Arup-Dutta-Bappy/bert-large-uncased-whole-word-masking-finetuned-squad | Arup-Dutta-Bappy | 2023-07-14T17:09:41Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-14T14:51:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-large-uncased-whole-word-masking-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-finetuned-squad
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) on the squad dataset.
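A minimal question-answering sketch with the `pipeline` API; question and context are placeholders:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Arup-Dutta-Bappy/bert-large-uncased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of BERT large on the SQuAD dataset.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```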
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DanGalt/distilhubert-finetuned-gtzan | DanGalt | 2023-07-14T17:07:56Z | 167 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-02T16:13:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7162
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
- label_smoothing_factor: 0.05
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5923 | 1.0 | 113 | 1.7310 | 0.44 |
| 1.2071 | 2.0 | 226 | 1.2546 | 0.62 |
| 1.0673 | 3.0 | 339 | 0.9320 | 0.76 |
| 0.8149 | 4.0 | 452 | 0.8768 | 0.81 |
| 0.4999 | 5.0 | 565 | 0.7154 | 0.86 |
| 0.3562 | 6.0 | 678 | 0.6631 | 0.89 |
| 0.3852 | 7.0 | 791 | 0.7136 | 0.87 |
| 0.4476 | 8.0 | 904 | 0.7162 | 0.88 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
efainman/ppo-LunarLander-v2 | efainman | 2023-07-14T17:05:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T17:04:52Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.83 +/- 21.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sotoy/path_to_saved_model | sotoy | 2023-07-14T16:56:29Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-14T13:32:56Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - sotoy/path_to_saved_model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
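A minimal inference sketch with `diffusers`, assuming a CUDA GPU (drop `torch_dtype` and `.to("cuda")` to run on CPU); the prompt builds on the instance prompt above:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sotoy/path_to_saved_model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```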
|
manosp/audio_inversion_cat | manosp | 2023-07-14T16:46:07Z | 48 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-13T13:04:34Z |
---
license: creativeml-openrail-m
base_model: /home/plitsis/text-inv/audioldm-m-full
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - manosp/audio_inversion_cat
These are textual inversion adaptation weights for /home/plitsis/text-inv/audioldm-m-full. You can find some example images in the following.
|
chh6/dqn-SpaceInvadersNoFrameskip-v4 | chh6 | 2023-07-14T16:41:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T16:40:42Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 481.00 +/- 179.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chh6 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chh6 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga chh6
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Akriel/ResNetYoloV1 | Akriel | 2023-07-14T16:36:14Z | 0 | 0 | null | [
"tensorboard",
"computer_vision",
"vision_models_playground",
"custom-implementation",
"region:us"
] | null | 2023-07-10T16:40:25Z | ---
tags:
- computer_vision
- vision_models_playground
- custom-implementation
---
# **Vision Models Playground**
This is a trained model from the Vision Models Playground repository.
Link to the repository: https://github.com/Akrielz/vision_models_playground
## **Model**
This model is a custom implementation of **ResNetYoloV1** from the ```vision_models_playground.models.segmentation.yolo_v1``` module.
Please look in the config file for more information about the model architecture.
## **Usage**
To load the torch model, you can use the following code snippet:
```python
import torch
from vision_models_playground.utility.hub import load_vmp_model_from_hub
model = load_vmp_model_from_hub("Akriel/ResNetYoloV1")
x = torch.randn(...)
y = model(x) # y will be of type torch.Tensor
```
To load the pipeline that includes the model, you can use the following code snippet:
```python
from vision_models_playground.utility.hub import load_vmp_pipeline_from_hub
pipeline = load_vmp_pipeline_from_hub("Akriel/ResNetYoloV1")
x = raw_data # raw_data will be of type pipeline.input_type
y = pipeline(x) # y will be of type pipeline.output_type
```
## **Metrics**
The model was evaluated on the following dataset: **YoloPascalVocDataset** from ```vision_models_playground.datasets.yolo_pascal_voc_dataset```
These are the results of the evaluation:
- MulticlassAccuracy: 0.7241
- MulticlassAveragePrecision: 0.7643
- MulticlassAUROC: 0.9684
- Dice: 0.7241
- MulticlassF1Score: 0.7241
- LossTracker: 4.1958
## **Additional Information**
The train and evaluation runs are also saved using tensorboard. You can use the following command to visualize the runs:
```bash
tensorboard --logdir ./model
```
```bash
tensorboard --logdir ./eval
``` |
giocs2017/distilhubert-finetuned-gtzan | giocs2017 | 2023-07-14T16:32:55Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-13T01:20:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8934
- Accuracy: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.681 | 1.0 | 450 | 1.7351 | 0.5 |
| 1.5534 | 2.0 | 900 | 1.2192 | 0.66 |
| 0.6835 | 3.0 | 1350 | 1.0462 | 0.71 |
| 1.069 | 4.0 | 1800 | 0.5503 | 0.83 |
| 0.1563 | 5.0 | 2250 | 0.9394 | 0.78 |
| 0.0077 | 6.0 | 2700 | 0.9394 | 0.81 |
| 0.7444 | 7.0 | 3150 | 0.8934 | 0.82 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.13.1
- Tokenizers 0.13.2
|
grace-pro/afro-xlmr-base-igbo-2e-5 | grace-pro | 2023-07-14T16:26:52Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-14T15:52:27Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afro-xlmr-base-igbo-2e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-igbo-2e-5
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2264
- Precision: 0.7551
- Recall: 0.5122
- F1: 0.6104
- Accuracy: 0.9274
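A minimal token-classification sketch; the example sentence is illustrative and the entity tag set is not documented in this card:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afro-xlmr-base-igbo-2e-5",
    aggregation_strategy="simple",
)

# Illustrative Igbo sentence; returned entity groups depend on the training labels
print(ner("Chinua Achebe dere akwụkwọ Things Fall Apart."))
```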
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2433 | 1.0 | 1257 | 0.2406 | 0.7508 | 0.3936 | 0.5165 | 0.9160 |
| 0.203 | 2.0 | 2514 | 0.2336 | 0.7680 | 0.4294 | 0.5509 | 0.9206 |
| 0.1745 | 3.0 | 3771 | 0.2258 | 0.7637 | 0.4741 | 0.5850 | 0.9246 |
| 0.1585 | 4.0 | 5028 | 0.2276 | 0.7666 | 0.4908 | 0.5985 | 0.9264 |
| 0.1446 | 5.0 | 6285 | 0.2264 | 0.7551 | 0.5122 | 0.6104 | 0.9274 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kartikkitukale61/RobertaSentenceSimilarityKartik | kartikkitukale61 | 2023-07-14T16:25:34Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-14T16:24:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 460 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 184,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
YanJiangJerry/SA-roberta-e3-w1-1.5-b16-mt4-w0.01 | YanJiangJerry | 2023-07-14T16:13:32Z | 93 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T15:53:49Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-roberta-e3-w1-1.5-b16-mt4-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-roberta-e3-w1-1.5-b16-mt4-w0.01
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2790
- Accuracy: 0.94
- F1: 0.9470
- Precision: 0.9437
- Recall: 0.9504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.2093 | 0.915 | 0.9214 | 0.9632 | 0.8830 |
| 0.259 | 2.0 | 570 | 0.2161 | 0.935 | 0.9418 | 0.9512 | 0.9326 |
| 0.259 | 3.0 | 855 | 0.2790 | 0.94 | 0.9470 | 0.9437 | 0.9504 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
prognosis/cardio_qanda_openassistant_v2 | prognosis | 2023-07-14T15:56:38Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-14T13:04:24Z | ---
tags:
- generated_from_trainer
model-index:
- name: cardio_qanda_openassistant_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cardio_qanda_openassistant_v2
This model is a fine-tuned version of [prognosis/falcon7b_merged](https://huggingface.co/prognosis/falcon7b_merged) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA | bhenrym14 | 2023-07-14T15:50:00Z | 0 | 1 | null | [
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"region:us"
] | null | 2023-07-14T02:51:39Z | ---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
NOTE: This LoRA was trained on Llama-30b AFTER additional pretraining. I intend on providing the LoRA of that pretraining too. Applying this LoRA to base Llama-30b will likely result in a performance reduction. I have uploaded the fp16 merged weights [here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA/)
Mostly untested!
Find GPTQ quantized weights and full model card here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GPTQ
# RoPE Scaled QLoRA Fine-tune of Llama-33b on airoboros-gpt4-1.4.1 (LoRA)
## Overview
This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (LoRA) with several key modifications:
- Context length extended to 16384 by RoPE Scaled Embeddings.
- The Llama-33b base model is pretrained for an additional 100 steps on 8192-token sequences from the Pile dataset.
- Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
**This is a QLoRA fine-tune**
Pretraining took 10 hours. Finetuning took ~41 hours on 1x RTX 6000 Ada. |
Dlychan/Nadhieraa | Dlychan | 2023-07-14T15:47:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-14T15:42:59Z | ---
license: creativeml-openrail-m
---
|
NasimB/gpt2-concat-bnc-rarity-end-1p6 | NasimB | 2023-07-14T15:36:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-14T13:43:24Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-bnc-rarity-end-1p6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-bnc-rarity-end-1p6
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7136 | 0.29 | 500 | 5.6426 |
| 5.3544 | 0.59 | 1000 | 5.2039 |
| 4.9956 | 0.88 | 1500 | 4.9562 |
| 4.7225 | 1.17 | 2000 | 4.8042 |
| 4.568 | 1.46 | 2500 | 4.6819 |
| 4.4551 | 1.76 | 3000 | 4.5728 |
| 4.3337 | 2.05 | 3500 | 4.5041 |
| 4.1427 | 2.34 | 4000 | 4.4590 |
| 4.1052 | 2.63 | 4500 | 4.3959 |
| 4.0696 | 2.93 | 5000 | 4.3454 |
| 3.8614 | 3.22 | 5500 | 4.3396 |
| 3.813 | 3.51 | 6000 | 4.3118 |
| 3.789 | 3.81 | 6500 | 4.2754 |
| 3.6879 | 4.1 | 7000 | 4.2741 |
| 3.5215 | 4.39 | 7500 | 4.2692 |
| 3.5205 | 4.68 | 8000 | 4.2563 |
| 3.5065 | 4.98 | 8500 | 4.2419 |
| 3.3459 | 5.27 | 9000 | 4.2548 |
| 3.3262 | 5.56 | 9500 | 4.2549 |
| 3.327 | 5.85 | 10000 | 4.2536 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crcdng/ppo-LunarLander-v2 | crcdng | 2023-07-14T15:33:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T15:33:23Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.89 +/- 19.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YanJiangJerry/SA-roberta-e12-w1-1.5-b16-mt4-w0.01 | YanJiangJerry | 2023-07-14T15:31:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-14T14:13:19Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-roberta-e12-w1-1.5-b16-mt4-w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-roberta-e12-w1-1.5-b16-mt4-w0.01
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5163
- Accuracy: 0.946
- F1: 0.9523
- Precision: 0.9489
- Recall: 0.9557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.2656 | 0.886 | 0.9059 | 0.8472 | 0.9734 |
| 0.2593 | 2.0 | 570 | 0.1967 | 0.938 | 0.9453 | 0.9404 | 0.9504 |
| 0.2593 | 3.0 | 855 | 0.2726 | 0.925 | 0.9353 | 0.9109 | 0.9610 |
| 0.1239 | 4.0 | 1140 | 0.3039 | 0.942 | 0.9481 | 0.9567 | 0.9397 |
| 0.1239 | 5.0 | 1425 | 0.3721 | 0.935 | 0.9421 | 0.9463 | 0.9379 |
| 0.053 | 6.0 | 1710 | 0.4110 | 0.939 | 0.9458 | 0.9483 | 0.9433 |
| 0.053 | 7.0 | 1995 | 0.4106 | 0.941 | 0.9481 | 0.9407 | 0.9557 |
| 0.0183 | 8.0 | 2280 | 0.4839 | 0.94 | 0.9470 | 0.9437 | 0.9504 |
| 0.0004 | 9.0 | 2565 | 0.4994 | 0.945 | 0.9516 | 0.9442 | 0.9592 |
| 0.0004 | 10.0 | 2850 | 0.5032 | 0.943 | 0.9496 | 0.9471 | 0.9521 |
| 0.0026 | 11.0 | 3135 | 0.5092 | 0.946 | 0.9523 | 0.9489 | 0.9557 |
| 0.0026 | 12.0 | 3420 | 0.5163 | 0.946 | 0.9523 | 0.9489 | 0.9557 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|