| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-28 06:27:35) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 500 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-28 06:24:42) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
Realgon/N_bert_agnews_padding40model | Realgon | 2023-12-27T17:03:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T14:36:43Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
model-index:
- name: N_bert_agnews_padding40model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9473684210526315
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# N_bert_agnews_padding40model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5661
- Accuracy: 0.9474
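A minimal inference sketch (assuming the hosted weights load through the standard `transformers` pipeline; the example sentence is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: classify a news snippet into the AG News categories
classifier = pipeline("text-classification", model="Realgon/N_bert_agnews_padding40model")
print(classifier("Stocks rallied after the central bank held interest rates steady."))
```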
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.1785 | 1.0 | 7500 | 0.1884 | 0.9421 |
| 0.1379 | 2.0 | 15000 | 0.1990 | 0.9478 |
| 0.1127 | 3.0 | 22500 | 0.2389 | 0.9408 |
| 0.0846 | 4.0 | 30000 | 0.2528 | 0.9492 |
| 0.0581 | 5.0 | 37500 | 0.3041 | 0.9436 |
| 0.0456 | 6.0 | 45000 | 0.3415 | 0.9468 |
| 0.0411 | 7.0 | 52500 | 0.4081 | 0.9430 |
| 0.0239 | 8.0 | 60000 | 0.4415 | 0.9433 |
| 0.0202 | 9.0 | 67500 | 0.4380 | 0.9404 |
| 0.0126 | 10.0 | 75000 | 0.4637 | 0.9425 |
| 0.0175 | 11.0 | 82500 | 0.4485 | 0.9455 |
| 0.0126 | 12.0 | 90000 | 0.4761 | 0.9449 |
| 0.0046 | 13.0 | 97500 | 0.5009 | 0.9455 |
| 0.0038 | 14.0 | 105000 | 0.4784 | 0.9482 |
| 0.0035 | 15.0 | 112500 | 0.5282 | 0.9451 |
| 0.0046 | 16.0 | 120000 | 0.5256 | 0.9464 |
| 0.0026 | 17.0 | 127500 | 0.5081 | 0.9501 |
| 0.0008 | 18.0 | 135000 | 0.5543 | 0.9467 |
| 0.0002 | 19.0 | 142500 | 0.5448 | 0.9488 |
| 0.0016 | 20.0 | 150000 | 0.5661 | 0.9474 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ullrichx/a2c-PandaReachDense-v3 | ullrichx | 2023-12-27T17:02:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T16:53:57Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.27 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the
# standard <algo>-<env>.zip convention used by huggingface_sb3
checkpoint = load_from_hub("ullrichx/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
tshirtstate/willemx_LoRA | tshirtstate | 2023-12-27T16:56:25Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-12-27T16:56:23Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: willemx
license: openrail++
---
# SDXL LoRA DreamBooth - tshirtstate/willemx_LoRA
<Gallery />
## Model description
These are tshirtstate/willemx_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `willemx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](tshirtstate/willemx_LoRA/tree/main) them in the Files & versions tab.
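A minimal usage sketch (assuming the LoRA weights in this repository load via `load_lora_weights`; the prompt, dtype, and device are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model, then attach the DreamBooth LoRA from this repository
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("tshirtstate/willemx_LoRA")

# The trigger word activates the learned subject
image = pipe("a photo of willemx, studio portrait").images[0]
image.save("willemx.png")
```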
|
ntc-ai/SDXL-LoRA-slider.hair-up | ntc-ai | 2023-12-27T16:51:03Z | 180 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-27T16:51:00Z |
---
language:
- en
thumbnail: "images/evaluate/hair up.../hair up_17_3.0.png"
widget:
- text: hair up
output:
url: images/hair up_17_3.0.png
- text: hair up
output:
url: images/hair up_19_3.0.png
- text: hair up
output:
url: images/hair up_20_3.0.png
- text: hair up
output:
url: images/hair up_21_3.0.png
- text: hair up
output:
url: images/hair up_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "hair up"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - hair up (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/hair up_17_-3.0.png" width=256 height=256 /> | <img src="images/hair up_17_0.0.png" width=256 height=256 /> | <img src="images/hair up_17_3.0.png" width=256 height=256 /> |
| <img src="images/hair up_19_-3.0.png" width=256 height=256 /> | <img src="images/hair up_19_0.0.png" width=256 height=256 /> | <img src="images/hair up_19_3.0.png" width=256 height=256 /> |
| <img src="images/hair up_20_-3.0.png" width=256 height=256 /> | <img src="images/hair up_20_0.0.png" width=256 height=256 /> | <img src="images/hair up_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
hair up
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.hair-up', weight_name='hair up.safetensors', adapter_name="hair up")
# Activate the LoRA
pipe.set_adapters(["hair up"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, hair up"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 670 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
Rezakakooee/marian-finetuned-kde4-en-to-fr | Rezakakooee | 2023-12-27T16:47:29Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-12-27T15:53:26Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.6964
- eval_bleu: 39.1660
- eval_runtime: 1604.2015
- eval_samples_per_second: 13.102
- eval_steps_per_second: 0.205
- step: 0
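A minimal inference sketch (assuming the checkpoint loads through the standard `transformers` translation pipeline; the input sentence is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: translate English to French with this fine-tuned checkpoint
translator = pipeline("translation", model="Rezakakooee/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```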
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
douglasadams11/roberta-large-ner-new | douglasadams11 | 2023-12-27T16:32:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-27T13:29:31Z | ---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-ner-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-ner-new
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1106
- Precision: 0.9670
- Recall: 0.9604
- F1: 0.9637
- Accuracy: 0.9600
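A minimal inference sketch (assuming the checkpoint loads through the standard `transformers` token-classification pipeline; the example text is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: tag entities and group sub-word tokens into spans
ner = pipeline(
    "token-classification",
    model="douglasadams11/roberta-large-ner-new",
    aggregation_strategy="simple",
)
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```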
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1241 | 0.71 | 5000 | 0.1161 | 0.9618 | 0.9505 | 0.9561 | 0.9521 |
| 0.0993 | 1.42 | 10000 | 0.1132 | 0.9633 | 0.9568 | 0.9600 | 0.9562 |
| 0.0812 | 2.13 | 15000 | 0.1223 | 0.9662 | 0.9574 | 0.9618 | 0.9580 |
| 0.074 | 2.84 | 20000 | 0.1118 | 0.9661 | 0.9607 | 0.9634 | 0.9598 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
jameswatanabegoogle2024/ppo-SnowballTarget | jameswatanabegoogle2024 | 2023-12-27T16:26:47Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-12-27T16:26:44Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jameswatanabegoogle2024/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Prezily/gpt2-trial-r1 | Prezily | 2023-12-27T16:25:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-27T14:08:31Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-trial-r1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-trial-r1
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
santiviquez/t5-small-finetuned-samsum-en | santiviquez | 2023-12-27T16:17:25Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:samsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-07T15:52:00Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
base_model: t5-small
model-index:
- name: t5-small-finetuned-samsum-en
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: samsum
type: samsum
args: samsum
metrics:
- type: rouge
value: 44.3313
name: Rouge1
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 40.0386
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmRlMjZmNjQyYWQ5MjcyM2M2MzUwMjk5ZTQxOTg3NzY1NjAxY2FkNzY5OGI2YjcxYTg1Y2M1Y2M2NDM2YmI1YSIsInZlcnNpb24iOjF9.xxrRepLefbFAUWkOJwOenMuwQ8g4i2QkEUgB_d1YsAv2aRRQd0vPfiGCMltGEtCxqrgQ6vmndOlkXIJhCPV9CQ
- type: rouge
value: 15.8501
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ4ZDQ0OTM2ZjI3NGExYWRjNWNjNTYwNjA0YWE0NWVkODJmODAwZTYzZjU3NzVhNjRiM2Y3ZDFhYjIwMTcxOSIsInZlcnNpb24iOjF9.UnymHQUy2s5P8yNUkFRhj6drPkKviYUNN2yB9E1KvYssNpRWnUbD5X_cVfYGWXVLPrtYe9dc-f7vSvm2Z1ZtDA
- type: rouge
value: 31.8084
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTllNjQ2MGRjMTJkNmI3OWI5MTNmNWJjNmUyMTU1ZjkxYzkyNDg4MWI2MGU1NWI5NmZhMTFjNjE4ZTI5M2MyMiIsInZlcnNpb24iOjF9.rVGbelDJoVmcTD6OOQ7O8C_4LhrMMuYUniY_hAmmgZ8kU_wgtApwi6Ms1sgzqtvbF0cDHaLxejE9XPZ8ZDZMAA
- type: rouge
value: 36.0888
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQyNmZmMjFkZTY2MDhjZmIzZDBkM2ZkYzUxZTcxMTcwMDVjMDdiMzljMjU2NDA5OTUxZTEwYzQwZjg2NDJmMiIsInZlcnNpb24iOjF9.ZEBUBcPLCURLXPN5upXDHaIVu_ilUEyvZd81nnppZCWEuULyp30jcpmzLFb91v0WwRHMDPIjPl0hlckzq71ICw
- type: loss
value: 2.1917073726654053
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjA0MDk3MWZiMDgxMDlkZDFjY2UwODM0MTk4MmY2NzlkNThmYTA0ODk5MzgyZWQwYjVlZGFlZmJmNjA2NDA2ZSIsInZlcnNpb24iOjF9.Wc_5Wpf_Wa0Xm0A7w2EYnF1_eQ-2QU_v6eXr8SHveBszH5YhZBW6GS3yKslVVKKIaAGSGKtLIHzMW1H-NqqNDA
- type: gen_len
value: 18.1074
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDFlMmU0MTAyMDM5M2UyZDA2N2U4MjQ3MjhjYjdkOGY1ODdlNDY1NWY3NTQ3MzBhOWE3OTk2ZGU3ZTYyNjU1ZCIsInZlcnNpb24iOjF9.Ob1cLE1iYpV00ae1RYRIUNZz7V-x8IYTcU6ofR5gf07PdRqfiOgZtpV0tN3yM0_nyAJI71J8fnC6yWq10Y0HBw
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-samsum-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9335
- Rouge1: 44.3313
- Rouge2: 20.71
- Rougel: 37.221
- Rougelsum: 40.9603
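A minimal inference sketch (assuming the checkpoint loads through the standard `transformers` summarization pipeline; the dialogue is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: summarize a short SAMSum-style dialogue
summarizer = pipeline("summarization", model="santiviquez/t5-small-finetuned-samsum-en")
dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Tom: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```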
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.4912 | 1.0 | 300 | 1.9043 | 44.1517 | 20.0186 | 36.6053 | 40.5164 |
| 1.5055 | 2.0 | 600 | 1.8912 | 44.1473 | 20.4456 | 37.069 | 40.6714 |
| 1.4852 | 3.0 | 900 | 1.8986 | 44.7536 | 20.8646 | 37.525 | 41.2189 |
| 1.4539 | 4.0 | 1200 | 1.9136 | 44.2144 | 20.3446 | 37.1088 | 40.7581 |
| 1.4262 | 5.0 | 1500 | 1.9215 | 44.2656 | 20.6044 | 37.3267 | 40.9469 |
| 1.4118 | 6.0 | 1800 | 1.9247 | 43.8793 | 20.4663 | 37.0614 | 40.6065 |
| 1.3987 | 7.0 | 2100 | 1.9256 | 43.9981 | 20.2703 | 36.7856 | 40.6354 |
| 1.3822 | 8.0 | 2400 | 1.9316 | 43.9732 | 20.4559 | 36.8039 | 40.5784 |
| 1.3773 | 9.0 | 2700 | 1.9314 | 44.3075 | 20.5435 | 37.0457 | 40.832 |
| 1.3795 | 10.0 | 3000 | 1.9335 | 44.3313 | 20.71 | 37.221 | 40.9603 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
afrideva/MiniChat-2-3B-GGUF | afrideva | 2023-12-27T16:11:06Z | 33 | 1 | transformers | [
"transformers",
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"zh",
"arxiv:2311.07052",
"arxiv:2310.05914",
"arxiv:2305.18290",
"base_model:GeneZC/MiniChat-2-3B",
"base_model:quantized:GeneZC/MiniChat-2-3B",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-12-27T16:03:04Z | ---
base_model: GeneZC/MiniChat-2-3B
inference: false
language:
- en
- zh
library_name: transformers
license: apache-2.0
model_creator: GeneZC
model_name: MiniChat-2-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: "<s> [|User|] Hi \U0001F44B </s>[|Assistant|]"
---
# GeneZC/MiniChat-2-3B-GGUF
Quantized GGUF model files for [MiniChat-2-3B](https://huggingface.co/GeneZC/MiniChat-2-3B) from [GeneZC](https://huggingface.co/GeneZC)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [minichat-2-3b.fp16.gguf](https://huggingface.co/afrideva/MiniChat-2-3B-GGUF/resolve/main/minichat-2-3b.fp16.gguf) | fp16 | 6.04 GB |
| [minichat-2-3b.q2_k.gguf](https://huggingface.co/afrideva/MiniChat-2-3B-GGUF/resolve/main/minichat-2-3b.q2_k.gguf) | q2_k | 1.30 GB |
| [minichat-2-3b.q3_k_m.gguf](https://huggingface.co/afrideva/MiniChat-2-3B-GGUF/resolve/main/minichat-2-3b.q3_k_m.gguf) | q3_k_m | 1.51 GB |
| [minichat-2-3b.q4_k_m.gguf](https://huggingface.co/afrideva/MiniChat-2-3B-GGUF/resolve/main/minichat-2-3b.q4_k_m.gguf) | q4_k_m | 1.85 GB |
| [minichat-2-3b.q5_k_m.gguf](https://huggingface.co/afrideva/MiniChat-2-3B-GGUF/resolve/main/minichat-2-3b.q5_k_m.gguf) | q5_k_m | 2.15 GB |
| [minichat-2-3b.q6_k.gguf](https://huggingface.co/afrideva/MiniChat-2-3B-GGUF/resolve/main/minichat-2-3b.q6_k.gguf) | q6_k | 2.48 GB |
| [minichat-2-3b.q8_0.gguf](https://huggingface.co/afrideva/MiniChat-2-3B-GGUF/resolve/main/minichat-2-3b.q8_0.gguf) | q8_0 | 3.21 GB |
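A minimal local-inference sketch (assuming `llama-cpp-python` is installed and one of the files above has been downloaded; the prompt follows the chat format shown in the widget metadata):
```python
from llama_cpp import Llama

# Hypothetical usage: load a quantized GGUF file from this repository and generate a reply
llm = Llama(model_path="minichat-2-3b.q4_k_m.gguf", n_ctx=2048)
out = llm("<s> [|User|] Write a haiku about winter. </s>[|Assistant|]", max_tokens=128)
print(out["choices"][0]["text"])
```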
## Original Model Card:
## MiniChat-2-3B
📑 [arXiv](https://arxiv.org/abs/2311.07052) | 👻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | 🤖 [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤗 [HuggingFace-MiniMA-2](https://huggingface.co/GeneZC/MiniMA-2-3B) | 🤗 [HuggingFace-MiniChat-2](https://huggingface.co/GeneZC/MiniChat-2-3B)
🆕 **Updates from MiniChat-3B**:
- better base model MiniMA-2-3B;
- better data mixture;
- use of [NEFTune](https://arxiv.org/abs/2310.05914);
- use of [DPO](https://arxiv.org/abs/2305.18290).
❗ Must comply with LICENSE of LLaMA2 since it is derived from LLaMA2.
A language model continued from MiniMA-3B and finetuned on both instruction and preference data.
Surpassing Vicuna-7B and approximating LLaMA-2-Chat-7B on MT-Bench.
<img src="./teaser_b.jpg" alt="teaser_b" width="687" />
**Standard Benchmarks**
|Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)|
|--|--|--|--|--|--|--|--|
|Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49|
|ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56|
|BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55|
|StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99|
|Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97|
|Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42|
|LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10|
||
|MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11|
|MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72|
|MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87|
|MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13|
**Instruction-following Benchmarks**
|Method|AlpacaEval|MT-Bench|
|--|--|--|
|GPT-4|95.28|9.18|
|Zephyr-7B-Beta|90.60|7.34|
|Phi-2-DPO|81.37|-|
|StableLM Zephyr 3B|76.00|6.64|
|Vicuna-7B|76.84|6.17|
|LLaMA-2-Chat-7B|71.37|6.27|
||
|MiniChat-3B|48.82|-|
|MiniChat-2-3B|77.30|6.23|
The following is an example code snippet to use MiniChat-2-3B:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from conversation import get_default_conv_template
# MiniChat
tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniChat-2-3B", use_fast=False)
# GPU.
model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-2-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval()
# CPU.
# model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-2-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval()
conv = get_default_conv_template("minichat")
question = "Implement a program to find the common elements in two arrays without using any extra data structures."
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
input_ids = tokenizer([prompt]).input_ids
output_ids = model.generate(
torch.as_tensor(input_ids).cuda(),
do_sample=True,
temperature=0.7,
max_new_tokens=1024,
)
output_ids = output_ids[0][len(input_ids[0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
# output: "def common_elements(arr1, arr2):\n if len(arr1) == 0:\n return []\n if len(arr2) == 0:\n return arr1\n\n common_elements = []\n for element in arr1:\n if element in arr2:\n common_elements.append(element)\n\n return common_elements"
# Multiturn conversation could be realized by continuously appending questions to `conv`.
```
## Bibtex
```bibtex
@article{zhang2023law,
title={Towards the Law of Capacity Gap in Distilling Language Models},
author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
year={2023},
url={https://arxiv.org/abs/2311.07052}
}
``` |
Weyaxi/MetaMath-NeuralHermes-2.5-Mistral-7B-Linear | Weyaxi | 2023-12-27T16:02:09Z | 1,537 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-05T10:59:54Z | ---
license: apache-2.0
---
models:
  - model: meta-math/MetaMath-Mistral-7B
    parameters:
      weight: 0.5
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      weight: 0.3
merge_method: linear
dtype: float16 |
sushistarlord/distilbert-base-uncased-finetuned-emotion-sushant | sushistarlord | 2023-12-27T15:57:30Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T15:06:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-sushant
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9135
- name: F1
type: f1
value: 0.9128218997521944
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-sushant
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2815
- Accuracy: 0.9135
- F1: 0.9128
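A minimal inference sketch (assuming the checkpoint loads through the standard `transformers` pipeline; `top_k=None` returns scores for every emotion label):
```python
from transformers import pipeline

# Hypothetical usage: score all emotion labels for a sentence
classifier = pipeline(
    "text-classification",
    model="sushistarlord/distilbert-base-uncased-finetuned-emotion-sushant",
    top_k=None,
)
print(classifier("I can't wait to see you this weekend!"))
```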
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.9420 | 0.6785 | 0.6074 |
| No log | 2.0 | 126 | 0.4520 | 0.8705 | 0.8593 |
| No log | 3.0 | 189 | 0.3137 | 0.9095 | 0.9084 |
| 0.6765 | 4.0 | 252 | 0.2815 | 0.9135 | 0.9128 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.4
- Tokenizers 0.15.0
|
Shaleen123/neural_medical_chat | Shaleen123 | 2023-12-27T15:52:09Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"region:us"
] | null | 2023-12-27T15:08:11Z | ---
library_name: peft
base_model: Intel/neural-chat-7b-v3-3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
bpugnaire/Reinforce-Pixelcopter-PLE-v0 | bpugnaire | 2023-12-27T15:30:16Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T15:29:40Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.60 +/- 29.75
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
we1kkk/llama2-hf-qlora-oasst1 | we1kkk | 2023-12-27T15:28:17Z | 0 | 0 | null | [
"safetensors",
"conversational",
"dataset:OpenAssistant/oasst1",
"region:us"
] | text-generation | 2023-07-21T02:18:15Z | ---
datasets:
- OpenAssistant/oasst1
pipeline_tag: conversational
---
LoRA weights for Llama2 trained on the oasst1 dataset using 4-bit quantization (QLoRA). |
monsterapi/mistral_7b_DolphinCoder | monsterapi | 2023-12-27T15:17:19Z | 67 | 0 | peft | [
"peft",
"code",
"instruct",
"mistral",
"dataset:cognitivecomputations/dolphin-coder",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-21T02:04:03Z | ---
library_name: peft
tags:
- code
- instruct
- mistral
datasets:
- cognitivecomputations/dolphin-coder
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
---
### Finetuning Overview:
**Model Used:** mistralai/Mistral-7B-v0.1
**Dataset:** cognitivecomputations/dolphin-coder
#### Dataset Insights:
The [Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) dataset is a high-quality collection of 100,000+ coding questions and responses, well suited for supervised fine-tuning (SFT) and for teaching language models to improve on coding-based tasks.
#### Finetuning Details:
With the utilization of [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
- Was achieved with great cost-effectiveness.
- Completed in a total duration of 15hr 36mins for 1 epoch using an A6000 48GB GPU.
- Cost `$31.51` for the entire epoch.
#### Hyperparameters & Additional Details:
- **Epochs:** 1
- **Cost Per Epoch:** $31.51
- **Model Path:** mistralai/Mistral-7B-v0.1
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 128
- **lora r:** 32
- **lora alpha:** 64

---
license: apache-2.0 |
monsterapi/llama2_7b_DolphinCoder | monsterapi | 2023-12-27T15:16:30Z | 4 | 0 | peft | [
"peft",
"code",
"instruct",
"llama2",
"dataset:cognitivecomputations/dolphin-coder",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"region:us"
] | null | 2023-12-21T07:24:31Z | ---
library_name: peft
tags:
- code
- instruct
- llama2
datasets:
- cognitivecomputations/dolphin-coder
base_model: meta-llama/Llama-2-7b-hf
license: apache-2.0
---
### Finetuning Overview:
**Model Used:** meta-llama/Llama-2-7b-hf
**Dataset:** cognitivecomputations/dolphin-coder
#### Dataset Insights:
The [Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) dataset is a high-quality collection of 100,000+ coding questions and responses, well suited for supervised fine-tuning (SFT) and for teaching language models to improve on coding-based tasks.
#### Finetuning Details:
With the utilization of [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
- Was achieved with great cost-effectiveness.
- Completed in a total duration of 15hr 31mins for 1 epoch using an A6000 48GB GPU.
- Cost `$30.64` for the entire epoch.
#### Hyperparameters & Additional Details:
- **Epochs:** 1
- **Total Finetuning Cost:** $30.64
- **Model Path:** meta-llama/Llama-2-7b-hf
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 128
- **lora r:** 32
- **lora alpha:** 64

---
license: apache-2.0 |
Tobius/lugandawav2vec | Tobius | 2023-12-27T15:13:14Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"lg",
"dataset:tericlabs",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-27T13:21:55Z | ---
language:
- lg
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- tericlabs
metrics:
- wer
model-index:
- name: Whisper Small ganda
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Yogera data
type: tericlabs
config: lg
split: test
args: lg
metrics:
- name: Wer
type: wer
value: 54.276315789473685
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ganda
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Yogera data dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4937
- Wer: 54.2763
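A minimal inference sketch (assuming the checkpoint loads through the standard `transformers` speech-recognition pipeline; the audio path is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: transcribe a Luganda audio file with this checkpoint
asr = pipeline("automatic-speech-recognition", model="Tobius/lugandawav2vec")
print(asr("sample.wav")["text"])
```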
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.9882 | 26.0 | 500 | 1.4647 | 54.9342 |
| 0.0026 | 52.0 | 1000 | 1.3967 | 60.8553 |
| 0.0002 | 78.0 | 1500 | 1.4295 | 57.8947 |
| 0.0001 | 105.0 | 2000 | 1.4494 | 58.2237 |
| 0.0001 | 131.0 | 2500 | 1.4713 | 53.9474 |
| 0.0001 | 157.0 | 3000 | 1.4835 | 54.2763 |
| 0.0001 | 184.0 | 3500 | 1.4908 | 54.2763 |
| 0.0001 | 210.0 | 4000 | 1.4937 | 54.2763 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
judithrosell/SciBERT_CRAFT_NER_new | judithrosell | 2023-12-27T15:01:14Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"base_model:finetune:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-27T14:40:04Z | ---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: SciBERT_CRAFT_NER_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT_CRAFT_NER_new
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1199
- Precision: 0.9743
- Recall: 0.9761
- F1: 0.9752
- Accuracy: 0.9740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1537 | 1.0 | 695 | 0.1140 | 0.9707 | 0.9727 | 0.9717 | 0.9704 |
| 0.0452 | 2.0 | 1390 | 0.1128 | 0.9733 | 0.9750 | 0.9741 | 0.9731 |
| 0.0185 | 3.0 | 2085 | 0.1199 | 0.9743 | 0.9761 | 0.9752 | 0.9740 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
ccore/FT_512 | ccore | 2023-12-27T14:57:21Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2023-12-27T14:25:40Z | ---
license: mit
---
# Conversational Language Model Interface using FastText
This project provides a Command Line Interface (CLI) for interacting with a FastText language model, enabling users to generate text sequences based on their input. The script allows customization of parameters such as temperature, input text, output sequence length, and model file path.
## Installation
Before running the script, ensure you have Python installed on your system. Additionally, you'll need to install the FastText library:
```bash
pip install fasttext
```
## Colab
[Google Colab Notebook](https://colab.research.google.com/drive/1jX1NShX7MzJnuL2whHNOA39Xu-meQ1ap?usp=sharing)
## Usage
To use the script, you should first obtain or train a FastText model. Place the model file (usually with a `.bin` extension) in a known directory.
The script can be executed with various command-line arguments to specify the behavior:
```python
import argparse
import fasttext
import numpy as np
def apply_repetition_penalty(labels, probabilities, used_labels, penalty_scale=1.9):
"""
Applies a repetition penalty to reduce the probability of already used labels.
:param labels: List of possible labels.
:param probabilities: Corresponding list of probabilities.
:param used_labels: Set of labels that have already been used.
:param penalty_scale: Scale of the penalty to be applied.
:return: Adjusted probabilities.
"""
adjusted_probabilities = probabilities.copy()
for i, label in enumerate(labels):
if label in used_labels:
adjusted_probabilities[i] /= penalty_scale
# Normalize the probabilities to sum to 1 again
adjusted_probabilities /= adjusted_probabilities.sum()
return adjusted_probabilities
def predict_sequence(model, text, sequence_length=20, temperature=.5, penalty_scale=1.9):
"""
Generates a sequence of labels using the FastText model with repetition penalty.
:param model: Loaded FastText model.
:param text: Initial text to start the prediction from.
:param sequence_length: Desired length of the sequence.
:param temperature: Temperature for sampling.
:param penalty_scale: Scale of repetition penalty.
:return: List of predicted labels.
"""
used_labels = set()
sequence = []
for _ in range(sequence_length):
# Predict the top k most probable labels
labels, probabilities = model.predict(text, k=40)
labels = [label.replace('__label__', '') for label in labels]
probabilities = np.array(probabilities)
        # Adjust the probabilities with repetition penalty
        probabilities = apply_repetition_penalty(labels, probabilities, used_labels, penalty_scale)
        # Apply temperature scaling so the temperature argument actually affects sampling
        probabilities = probabilities ** (1.0 / temperature)
        probabilities /= probabilities.sum()
        # Sampling according to the adjusted probabilities
        label_index = np.random.choice(range(len(labels)), p=probabilities)
chosen_label = labels[label_index]
# Add the chosen label to the sequence and to the set of used labels
sequence.append(chosen_label)
used_labels.add(chosen_label)
# Update the text with the chosen label for the next prediction
text += ' ' + chosen_label
return sequence
def generate_response(model, input_text, sequence_length=512, temperature=.5, penalty_scale=1.9):
generated_sequence = predict_sequence(model, input_text, sequence_length, temperature, penalty_scale)
return ' '.join(generated_sequence)
def main():
parser = argparse.ArgumentParser(description="Run the language model with specified parameters.")
parser.add_argument('-t', '--temperature', type=float, default=0.5, help='Temperature for sampling.')
parser.add_argument('-f', '--file', type=str, help='File containing input text.')
parser.add_argument('-p', '--text', type=str, help='Direct input text.')
    parser.add_argument('-n', '--length', type=int, default=50, help='Length of the generated sequence.')
parser.add_argument('-m', '--model', type=str, required=True, help='Address of the FastText model file.')
args = parser.parse_args()
# Load the model
model = fasttext.load_model(args.model)
input_text = ''
if args.file:
with open(args.file, 'r') as file:
input_text = file.read()
elif args.text:
input_text = args.text
else:
print("No input text provided. Please use -f to specify a file or -p for direct text input.")
return
# Generate and print the response
response = generate_response(model, input_text + " [RESPONSE]", sequence_length=args.length, temperature=args.temperature)
print("\nResponse:")
print(response)
if __name__ == "__main__":
main()
```
```bash
python conversation_app.py -t TEMPERATURE -f FILE -p TEXT -n LENGTH -m MODEL_PATH
```
- `-t TEMPERATURE` or `--temperature TEMPERATURE`: Sets the temperature for predictions. A higher temperature results in more diverse results. Default is 0.5.
- `-f FILE` or `--file FILE`: Specifies a path to a file containing input text. The script will read this file and use its contents as input.
- `-p TEXT` or `--text TEXT`: Directly provide the input text as a string.
- `-n LENGTH` or `--length LENGTH`: Sets the length of the generated sequence. Default is 50.
- `-m MODEL_PATH` or `--model MODEL_PATH`: The path to the FastText model file (required).
### Example
```bash
python conversation_app.py -t 0.7 -p "What is the future of AI?" -n 40 -m /path/to/model.bin
```
This command sets the temperature to 0.7, uses the provided question as input, generates a sequence of 40 labels, and specifies the model file path.
## Note
- The script's output depends on the quality and training of the FastText model used.
- Ensure the specified model file path and input file path (if used) are correct.
|
gms0817/mistral-ehrs-transcription-summary-v2 | gms0817 | 2023-12-27T14:57:11Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T14:05:23Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: mistral-ehrs-transcription-summary-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-ehrs-transcription-summary-v2
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1798 | 0.17 | 25 | 0.9513 |
| 0.8126 | 0.34 | 50 | 0.8061 |
| 0.6553 | 0.51 | 75 | 0.7364 |
| 0.5837 | 0.68 | 100 | 0.6550 |
| 0.4432 | 0.85 | 125 | 0.6435 |
| 0.4879 | 1.02 | 150 | 0.6329 |
| 0.3297 | 1.19 | 175 | 0.6310 |
| 0.2869 | 1.36 | 200 | 0.6310 |
| 0.2945 | 1.53 | 225 | 0.6101 |
| 0.221 | 1.7 | 250 | 0.6160 |
| 0.2233 | 1.87 | 275 | 0.6117 |
| 0.242 | 2.04 | 300 | 0.6295 |
| 0.15 | 2.21 | 325 | 0.6303 |
| 0.1496 | 2.38 | 350 | 0.6544 |
| 0.206 | 2.55 | 375 | 0.6508 |
| 0.1516 | 2.72 | 400 | 0.6466 |
| 0.1409 | 2.89 | 425 | 0.6587 |
| 0.1396 | 3.06 | 450 | 0.6656 |
| 0.0848 | 3.23 | 475 | 0.6747 |
| 0.1037 | 3.4 | 500 | 0.6734 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
udtsealusagi/kor-news-ner | udtsealusagi | 2023-12-27T14:55:39Z | 0 | 0 | null | [
"ko",
"region:us"
] | null | 2023-12-27T13:28:02Z | ---
language:
- ko
---
This model was fine-tuned on Part 1 (automatically generated clickbait text with title–body mismatch) of the Clickbait Article Detection dataset (낚시성 기사 탐지 데이터셋) provided by AI-HUB.
It was fine-tuned from "noahkim/KoBigBird-KoBart-News-Summarization" on Hugging Face.
Hyperparameters:
- learning rate: 2e-5
- epochs: 10
- batch size: 8
- seed: 42
|
Sarthak279/Lyrics-Generation | Sarthak279 | 2023-12-27T14:55:37Z | 10 | 1 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-27T14:53:46Z | ---
license: mit
base_model: gpt2-medium
tags:
- generated_from_keras_callback
model-index:
- name: Lyrics-Generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lyrics-Generation
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.2131
- Epoch: 4
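A minimal generation sketch (assuming the TensorFlow weights in this repository load through the `transformers` pipeline with `framework="tf"`; the seed line is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: continue a seed line into lyrics
generator = pipeline("text-generation", model="Sarthak279/Lyrics-Generation", framework="tf")
print(generator("Under the city lights tonight", max_new_tokens=60)[0]["generated_text"])
```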
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -730, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.2624 | 0 |
| 2.9734 | 1 |
| 2.7453 | 2 |
| 2.4606 | 3 |
| 2.2131 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
gehug/vit-base-patch16-224-finetuned-flower | gehug | 2023-12-27T14:48:51Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-27T14:38:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
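A minimal inference sketch (assuming the checkpoint loads through the standard `transformers` image-classification pipeline; the image path is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: classify a flower photo with this fine-tuned checkpoint
classifier = pipeline("image-classification", model="gehug/vit-base-patch16-224-finetuned-flower")
print(classifier("flower.jpg"))
```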
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
ThunpitchaPin/my_awesome_asr_mind_model | ThunpitchaPin | 2023-12-27T14:47:47Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-27T14:08:04Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_asr_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
anthony-eden/binary-cs-curriculum-classifier-v1 | anthony-eden | 2023-12-27T14:45:43Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-25T22:15:54Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: binary-cs-curriculum-classifier-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-cs-curriculum-classifier-v1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
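A minimal, hedged inference sketch; the label names (likely `LABEL_0`/`LABEL_1`) and their meaning are not documented in this card, so check `model.config.id2label`:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="anthony-eden/binary-cs-curriculum-classifier-v1")
print(clf("Introduction to data structures and algorithms."))  # example input, not from the training set
```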
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.137 | 1.0 | 40 | 0.0012 |
| 0.0004 | 2.0 | 80 | 0.0001 |
| 0.0005 | 3.0 | 120 | 0.0001 |
| 0.0002 | 4.0 | 160 | 0.0001 |
| 0.0002 | 5.0 | 200 | 0.0001 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0.post101
- Datasets 2.14.6
- Tokenizers 0.13.3
|
ostapeno/flan-library-for-neo-1B_evol_metav2_ai1_debug_ai1_debug | ostapeno | 2023-12-27T14:42:12Z | 0 | 0 | null | [
"region:us"
] | null | 2023-12-27T14:36:24Z | Number of experts present in the library: 1
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| ai2_arc_ARC_Challenge_1_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
Last updated on: 2023-12-27 14:42:11+00:00
|
stablediffusionapi/deliberate-v2 | stablediffusionapi | 2023-12-27T14:32:56Z | 591 | 11 | diffusers | [
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-03-27T14:45:13Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Deliberate V2 API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "deliberate-v2".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/deliberate-v2)
Model link: [View model](https://modelslab.com/models/deliberate-v2)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "deliberate-v2",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
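The repo tags also list `diffusers` / `StableDiffusionPipeline` weights, so a local sketch along these lines should work (the fp16/CUDA settings are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stablediffusionapi/deliberate-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # assumes a CUDA GPU with enough VRAM
image = pipe("ultra realistic close up portrait of a cyberpunk female, cinematic lighting").images[0]
image.save("deliberate-v2.png")
```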
> Use this coupon code to get 25% off **DMGG0RBN** |
ostapeno/flan-library-for-neo-1B_evol_metav2_ai1_debug | ostapeno | 2023-12-27T14:24:32Z | 0 | 0 | null | [
"region:us"
] | null | 2023-12-27T13:58:52Z | Number of experts present in the library: 1
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| ai2_arc_ARC_Challenge_1_0_0 | EleutherAI/gpt-neo-1.3B | sordonia/adauni-v3-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
Last updated on: 2023-12-27 14:24:30+00:00
|
EExe/a2c-PandaReachDense-v2 | EExe | 2023-12-27T14:15:03Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-27T09:47:12Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.48 +/- 0.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's files for the exact .zip name
checkpoint = load_from_hub("EExe/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
rabil/CodeNinja-1.0-OpenChat-7B-llamafile | rabil | 2023-12-27T14:14:59Z | 4 | 1 | null | [
"llamafile",
"region:us"
] | null | 2023-12-27T14:13:16Z |
## CodeNinja-1.0-OpenChat-7B-llamafile
llamafile lets you distribute and run LLMs with a single file. [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/)
#### Downloads
- [codeninja-1.0-openchat-7b.Q4_K_M-server.llamafile](https://huggingface.co/rabil/CodeNinja-1.0-OpenChat-7B-llamafile/resolve/main/codeninja-1.0-openchat-7b.Q4_K_M-server.llamafile)
This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder)
|
jameswatanabegoogle2024/Reinforce-model-id1 | jameswatanabegoogle2024 | 2023-12-27T14:11:53Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T14:11:27Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model-id1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rabil/mistral-ft-optimized-1218-llamafile | rabil | 2023-12-27T14:09:43Z | 8 | 0 | null | [
"llamafile",
"region:us"
] | null | 2023-12-27T14:05:49Z |
## mistral-ft-optimized-1218-llamafile
llamafile lets you distribute and run LLMs with a single file. [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/)
#### Downloads
- [mistral-ft-optimized-1218.Q4_K_M-server.llamafile](https://huggingface.co/rabil/mistral-ft-optimized-1218-llamafile/resolve/main/mistral-ft-optimized-1218.Q4_K_M-server.llamafile)
- [mistral-ft-optimized-1218.Q5_K_M-server.llamafile](https://huggingface.co/rabil/mistral-ft-optimized-1218-llamafile/resolve/main/mistral-ft-optimized-1218.Q5_K_M-server.llamafile)
- [mistral-ft-optimized-1218.Q8_0-server.llamafile](https://huggingface.co/rabil/mistral-ft-optimized-1218-llamafile/resolve/main/mistral-ft-optimized-1218.Q8_0-server.llamafile)
This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder)
|
xyz2zyx/ppo-Huggy | xyz2zyx | 2023-12-27T14:08:54Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-27T14:08:48Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: xyz2zyx/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ruizhaocv/MotionDirector | ruizhaocv | 2023-12-27T14:01:54Z | 0 | 8 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-12-07T12:40:48Z | ---
license: apache-2.0
---
```bash
git lfs clone https://huggingface.co/ruizhaocv/MotionDirector
```
For more details, please refer to the github repo: https://github.com/showlab/MotionDirector |
OpenNMT/Mistral-7B-v0.2-instruct-onmt-awq-gemv | OpenNMT | 2023-12-27T13:55:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-11-29T17:22:20Z | ---
license: apache-2.0
---
This is the OpenNMT-py converted version of Mistral 7B Instruct v0.2, 4-bit AWQ quantized.
The safetensors file is 4.2 GB, so it runs smoothly on any RTX card.
The command line to run is:
```
python onmt/bin/translate.py --config /pathto/mistral-instruct-inference-awq.yaml --src /pathto/input-vicuna.txt --output /pathto/mistral-output.txt
```
Where, for instance, input-vicuna.txt contains:
```
USER:⦅newline⦆Show me some attractions in Boston.⦅newline⦆⦅newline⦆ASSISTANT:⦅newline⦆
```
Output will be:
```
Boston is a great city with many attractions to visit. Here are some popular ones:⦅newline⦆⦅newline⦆1. The Freedom Trail - This is a 2.5-mile-long path through downtown Boston that passes by 16 historically significant sites related to the American Revolution.⦅newline⦆2. The Massachusetts State House - The iconic red brick building that serves as the seat of the Massachusetts government and is home to the Massachusetts Legislature and the Governor.⦅newline⦆3. The Boston Tea Party Ships and Museum - This museum tells the story of the Boston Tea Party and the events leading up to the American Revolution through interactive exhibits and live reenactments.⦅newline⦆4. The Paul Revere House - The oldest house in the United States, it was home to the famous silversmith and patriot Paul Revere.⦅newline⦆5. The USS Constitution - A historic warship that played a key role in the American Revolution, the USS Constitution is now a museum and a popular tourist attraction.⦅newline⦆6. The Bunker Hill Memorial Park - A beautiful park located on the site of the first military engagement of the American Revolution, the Battle of Bunker Hill.⦅newline⦆7. The Museum of Fine Arts, Boston - One of the largest
```
If you run with a batch size of 60 you can get a nice throughput even with GEMV:
```
[2023-12-27 14:54:47,967 INFO] Loading checkpoint from /mnt/InternalCrucial4/dataAI/mistral-7B/mistral-instruct-v0.2/mistral-instruct-v0.2-onmt-awq-gemv.pt
[2023-12-27 14:54:48,063 INFO] awq_gemv compression of layer ['w_1', 'w_2', 'w_3', 'linear_values', 'linear_query', 'linear_keys', 'final_linear']
[2023-12-27 14:54:52,059 INFO] Loading data into the model
step0 time: 1.2714881896972656
[2023-12-27 14:54:59,180 INFO] PRED SCORE: -0.2316, PRED PPL: 1.26 NB SENTENCES: 59
[2023-12-27 14:54:59,180 INFO] Total translation time (s): 6.1
[2023-12-27 14:54:59,180 INFO] Average translation time (ms): 103.5
[2023-12-27 14:54:59,180 INFO] Tokens per second: 2183.8
Time w/o python interpreter load/terminate: 11.222625255584717
```
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError1.0_Seed105 | behzadnet | 2023-12-27T13:54:26Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-27T13:54:22Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
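A hedged sketch of the equivalent `transformers` quantization config built from the values above (not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```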
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError1.0_Seed105 | behzadnet | 2023-12-27T13:54:14Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-27T13:54:06Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
Sloba/RRLearn-Taxi-v3-3 | Sloba | 2023-12-27T13:43:38Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T13:43:36Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RRLearn-Taxi-v3-3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Sloba/RRLearn-Taxi-v3-3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
am-infoweb/classification_test_27Dec | am-infoweb | 2023-12-27T13:43:26Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T11:05:03Z | ---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: classification_test_14Dec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification_test_14Dec
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3279
- Accuracy: 0.9493
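A hedged usage sketch; Longformer accepts inputs up to 4096 tokens, and the label names are not documented in this card (passing tokenizer kwargs through the pipeline call works in recent `transformers` versions):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="am-infoweb/classification_test_27Dec")
long_document = "..."  # your (possibly very long) input text
print(clf(long_document, truncation=True, max_length=4096))
```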
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4495 | 1.0 | 5293 | 0.6105 | 0.8841 |
| 0.2916 | 2.0 | 10586 | 0.4680 | 0.9294 |
| 0.3098 | 3.0 | 15879 | 0.4065 | 0.9419 |
| 0.2335 | 4.0 | 21172 | 0.3796 | 0.9430 |
| 0.1565 | 5.0 | 26465 | 0.3467 | 0.9428 |
| 0.1886 | 6.0 | 31758 | 0.3174 | 0.9487 |
| 0.1042 | 7.0 | 37051 | 0.3279 | 0.9493 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
razeruwu/prigozhinJenya | razeruwu | 2023-12-27T13:40:25Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2023-12-27T13:32:24Z | ---
license: other
license_name: evgeniy
license_link: LICENSE
---
|
Sloba/RRLearn-Taxi-v3 | Sloba | 2023-12-27T13:34:29Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T13:34:27Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RRLearn-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Sloba/RRLearn-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
renardkorzeniowski/distilbert-base-uncased-finetuned-cola | renardkorzeniowski | 2023-12-27T13:29:32Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T12:01:37Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5964
- eval_matthews_correlation: 0.0
- eval_runtime: 36.4296
- eval_samples_per_second: 28.631
- eval_steps_per_second: 1.812
- epoch: 0.09
- step: 50
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
DucPham1501/finetuning-sentiment-model-3000-samples | DucPham1501 | 2023-12-27T12:52:30Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T12:45:19Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3393
- Accuracy: 0.8667
- F1: 0.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
snoop088/testing_fine_tune_qa | snoop088 | 2023-12-27T12:44:33Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-12-24T14:34:57Z | ---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloom-3b
model-index:
- name: testing_fine_tune_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing_fine_tune_qa
This model is a fine-tuned version of [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9148 | 0.4 | 200 | 1.6602 |
| 1.6726 | 0.8 | 400 | 1.3506 |
| 1.0625 | 1.2 | 600 | 1.2383 |
| 0.8001 | 1.6 | 800 | 1.1885 |
| 0.3615 | 2.0 | 1000 | 1.1709 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0 |
LoneStriker/goliath-120b-2.9bpw-h6-exl2 | LoneStriker | 2023-12-27T12:41:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-27T12:23:59Z | ---
license: llama2
language:
- en
pipeline_tag: conversational
---
# Goliath 120B
An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one.
Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix):
- [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp)
- [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite)
- [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM)
- [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI)
# Prompting Format
Both Vicuna and Alpaca will work, but because the initial and final layers belong primarily to Xwin, I expect Vicuna to work the best.
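A Vicuna-style prompt as a sketch; the exact system line is an assumption, not something specified by the authors:
```python
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "  # assumed system line
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a short story about llamas. ASSISTANT:"
)
```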
# Merge process
The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).
The layer ranges used are as follows:
```yaml
- range 0, 16
Xwin
- range 8, 24
Euryale
- range 17, 32
Xwin
- range 25, 40
Euryale
- range 33, 48
Xwin
- range 41, 56
Euryale
- range 49, 64
Xwin
- range 57, 72
Euryale
- range 65, 80
Xwin
```
# Screenshots

# Benchmarks
Coming soon.
# Acknowledgements
Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios. |
Weiming1122/Reinforce-Pixelcopter-PLE-v0 | Weiming1122 | 2023-12-27T12:36:53Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T05:45:39Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.90 +/- 26.62
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny | BramVanroy | 2023-12-27T12:31:50Z | 130 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"lora",
"adapters",
"nl",
"dataset:yhavinga/mc4_nl_cleaned",
"arxiv:2312.12852",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-10T05:21:50Z | ---
license: apache-2.0
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
- llama
- lora
- adapters
datasets:
- yhavinga/mc4_nl_cleaned
language:
- nl
model-index:
- name: llama2-13b-ft-mc4_nl_cleaned_tiny
results: []
---
# llama2-13b-ft-mc4_nl_cleaned_tiny
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
on the [yhavinga/mc4_nl_cleaned](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned/viewer/tiny/train) dataset (`tiny` partition) on a context of 4096 tokens.
See the original [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) for more information, intended use, and biases.
If you use this model or refer to it, please use the following citation:
Vanroy, B. (2023). *Language Resources for Dutch Large Language Modelling*. [https://arxiv.org/abs/2312.12852](https://arxiv.org/abs/2312.12852)
```bibtex
@article{vanroy2023language,
title={Language Resources for {Dutch} Large Language Modelling},
author={Vanroy, Bram},
journal={arXiv preprint arXiv:2312.12852},
year={2023}
}
```
## Intended uses & limitations
While Llama 2 already contains some proficiency in Dutch, this finetune is intended to improve the fluency of its Dutch (not increase its knowledge). It is therefore
intended as a generative model for the Dutch language. The biases, shortcomings and intended uses are otherwise the same as those of
the [original model](https://huggingface.co/meta-llama/Llama-2-13b-hf). The model can be used for generative tasks or finetuned further on other tasks
such as summarization, adaptation, instruction or chat finetuning.
## Training and evaluation data
Trained on the [yhavinga/mc4_nl_cleaned](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned/viewer/tiny/train) dataset (`tiny` partition) for one epoch. The canonical
validation split was not used; instead, 5% of `train` was held out for validation.
## Training procedure
Trained with LoRA targeting `["q_proj", "v_proj"]` in 4-bit and merged before upload. Trained with Flash Attention as borrowed from
[here](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/utils/llama_patch.py).
The adapters are in the `adapters` branch.
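A hedged sketch of the LoRA + 4-bit setup described above, using `peft` and `transformers`; the rank/alpha values are assumptions, since only the target modules are documented here:
```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", quantization_config=bnb, device_map="auto")
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")  # r/alpha assumed
model = get_peft_model(base, lora)
```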
The initial training investigation was run on the Tier-1 HPC of the [Vlaams Supercomputer Centrum (VSC)](https://www.vscentrum.be/); training was then done on our own server with 4x RTX 3090s.
### Training hyperparameters
The following hyperparameters were used during training in the HPC investigation:
- learning_rate: 0.0003
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 1152
- total_eval_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8784 | 0.09 | 90 | 1.8820 |
| 1.8344 | 0.19 | 180 | 1.8542 |
| 1.8351 | 0.28 | 270 | 1.8355 |
| 1.8206 | 0.37 | 360 | 1.8212 |
| 1.8021 | 0.47 | 450 | 1.8088 |
| 1.8102 | 0.56 | 540 | 1.7982 |
| 1.7991 | 0.65 | 630 | 1.7890 |
| 1.7788 | 0.74 | 720 | 1.7811 |
| 1.7915 | 0.84 | 810 | 1.7742 |
| 1.7715 | 0.93 | 900 | 1.7676 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BramVanroy__llama2-13b-ft-mc4_nl_cleaned_tiny)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.81 |
| ARC (25-shot) | 59.3 |
| HellaSwag (10-shot) | 82.04 |
| MMLU (5-shot) | 54.67 |
| TruthfulQA (0-shot) | 38.03 |
| Winogrande (5-shot) | 77.27 |
| GSM8K (5-shot) | 10.31 |
| DROP (3-shot) | 6.08 |
|
wenjun123/token_calssification_model | wenjun123 | 2023-12-27T12:30:18Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-27T07:56:04Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: wenjun123/token_calssification_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wenjun123/token_calssification_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0468
- Validation Loss: 0.0531
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1741 | 0.0639 | 0 |
| 0.0468 | 0.0531 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Tokenizers 0.15.0
|
andrijdavid/Mistral-T5-7B-v1-GGUF | andrijdavid | 2023-12-27T12:18:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"GGUF",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-27T12:18:18Z | ---
license: apache-2.0
tags:
- GGUF
quantized_by: andrijdavid
---
# Mistral-T5-7B-v1-GGUF
- Original model: [Mistral-T5-7B-v1](https://huggingface.co/ignos/Mistral-T5-7B-v1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Mistral-T5-7B-v1](https://huggingface.co/ignos/Mistral-T5-7B-v1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/Mistral-T5-7B-v1-GGUF and below it, a specific filename to download, such as: Mistral-T5-7B-v1-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/Mistral-T5-7B-v1-GGUF Mistral-T5-7B-v1-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/Mistral-T5-7B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/Mistral-T5-7B-v1-GGUF Mistral-T5-7B-v1-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Mistral-T5-7B-v1-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Mistral-T5-7B-v1-f16.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Mistral-T5-7B-v1-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Mistral-T5-7B-v1
# Model Card for Model ID
This model is a finetuning of [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2)
## Model Details
### Model Description
- **Developed by:** Ignos
- **Model type:** Mistral
- **License:** Apache-2.0
## Uses
Model created to improve instructional behavior.
## Bias, Risks, and Limitations
The same biases, risks and limitations as the base models.
## Training Details
### Training Data
- [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)
### Training Procedure
- Trained with a QLoRA approach and merged with the base model.
### Results
- Huggingface evaluation pending
#### Summary
## Technical Specifications
### Model Architecture and Objective
- Models based on Mistral Architecture
### Compute Infrastructure
- Training on RunPod
#### Hardware
- 3 x RTX 4090
- 48 vCPU 377 GB RAM
#### Software
- Axolotl 0.3.0
### Framework versions
- PEFT 0.6.0
<!-- original-model-card end --> |
judithrosell/PubMedBERT_CRAFT_NER_new | judithrosell | 2023-12-27T12:14:13Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-27T11:58:23Z | ---
license: mit
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: PubMedBERT_CRAFT_NER_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBERT_CRAFT_NER_new
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1034
- Precision: 0.9811
- Recall: 0.9782
- F1: 0.9797
- Accuracy: 0.9751
## Model description
More information needed
## Intended uses & limitations
More information needed
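As a minimal usage sketch, the model can be loaded as a token-classification pipeline (the example sentence is illustrative; entity labels follow the CRAFT annotation scheme used during fine-tuning):
```python
# Minimal sketch: run the fine-tuned model as a token-classification (NER) pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="judithrosell/PubMedBERT_CRAFT_NER_new",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("The BRCA1 protein is expressed in human mammary epithelial cells."))
```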
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
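Expressed as `transformers` `TrainingArguments`, the settings listed above correspond roughly to the following sketch:
```python
# The hyperparameters above, expressed as transformers TrainingArguments (sketch).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="PubMedBERT_CRAFT_NER_new",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```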
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2176 | 1.0 | 695 | 0.1101 | 0.9780 | 0.9739 | 0.9759 | 0.9708 |
| 0.0555 | 2.0 | 1390 | 0.1019 | 0.9800 | 0.9770 | 0.9785 | 0.9739 |
| 0.0283 | 3.0 | 2085 | 0.1034 | 0.9811 | 0.9782 | 0.9797 | 0.9751 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
trl-lib/OpenHermes-2-Mistral-7B-sigmoid-beta-0.7-steps-800 | trl-lib | 2023-12-27T12:11:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:10:50Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-sigmoid-beta-0.7-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
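As a minimal sketch (assuming, per the repository metadata, that this is a PEFT adapter on top of `teknium/OpenHermes-2.5-Mistral-7B`), the adapter can be loaded for inference roughly as follows:
```python
# Minimal sketch: attach this PEFT adapter to the base model for inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
adapter_id = "trl-lib/OpenHermes-2-Mistral-7B-sigmoid-beta-0.7-steps-800"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Write a short poem about llamas.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```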
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-sigmoid-beta-0.1-steps-800 | trl-lib | 2023-12-27T12:09:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:09:25Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-sigmoid-beta-0.1-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.9-steps-800 | trl-lib | 2023-12-27T12:09:25Z | 2 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:09:12Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-kto-beta-0.9-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
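The repository name suggests training with TRL's `DPOTrainer` using the `kto_pair` loss, `beta=0.9`, and 800 steps; the exact configuration is not documented. A purely hypothetical sketch of such a setup:
```python
# Hypothetical sketch of the presumed training setup; loss type, beta, and step count
# are inferred from the repository name, everything else is a placeholder.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import DPOTrainer

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder: any preference dataset with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your-org/your-preference-dataset", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,           # with a PEFT config, the frozen base model serves as the reference
    beta=0.9,                 # from the repository name
    loss_type="kto_pair",     # from the repository name
    args=TrainingArguments(output_dir="out", max_steps=800, per_device_train_batch_size=2),
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```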
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.8-steps-800 | trl-lib | 2023-12-27T12:09:09Z | 1 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:08:55Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-kto-beta-0.8-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.7-steps-800 | trl-lib | 2023-12-27T12:08:54Z | 1 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:08:41Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-kto-beta-0.7-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.4-steps-800 | trl-lib | 2023-12-27T12:07:47Z | 1 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:07:34Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-kto-beta-0.4-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
ku-nlp/gpt2-large-japanese-char | ku-nlp | 2023-12-27T12:07:30Z | 63 | 5 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-27T11:18:45Z | ---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- gpt2
datasets:
- wikipedia
- cc100
- oscar
widget:
- text: "<s>昨日私は京都で"
---
# Model Card for Japanese character-level GPT-2 Large
## Model description
This is a Japanese character-level GPT-2 Large (717M parameters) language model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
## How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='ku-nlp/gpt2-large-japanese-char')
>>> set_seed(5)
>>> generator("<s>昨日私は京都で", max_length=30, do_sample=True, num_return_sequences=5)
[{'generated_text': '<s>昨日私は京都で仕事だったのですが、帰りは車を信号で止めて、'},
{'generated_text': '<s>昨日私は京都で開かれた大阪市都市戦略会議に出席しました。そ'},
{'generated_text': '<s>昨日私は京都で行われました関西の教育者・学校事例が集まるイ'},
{'generated_text': '<s>昨日私は京都では初雪を見ました。朝は少しパッとしない天気で'},
{'generated_text': '<s>昨日私は京都でこみっくトレジャーさんの撮影を見学させていた'}]
```
You can also use this model to get the features of a given text.
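For example (a minimal sketch; see the Vocabulary note below about replacing ASCII spaces with U+3000):
```python
# Minimal sketch: extract hidden-state features for a given text.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ku-nlp/gpt2-large-japanese-char")
model = AutoModel.from_pretrained("ku-nlp/gpt2-large-japanese-char")

text = "<s>昨日私は京都で仕事をしました。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # shape: (1, sequence_length, hidden_size)
```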
## Vocabulary
A character-level vocabulary of size 6K is used. To be precise, rare characters may be split into bytes because byte-level byte-pair encoding (BPE) is used. The BPE tokenizer was trained on a small subset of the training data. Since the data were converted into a one-character-per-line format, merge operations never go beyond character boundaries.
Note that the tokenizer maps U+0020 to `[UNK]` because preprocessing eliminated whitespace characters (U+0020) from training data. Use U+3000 (Ideographic Space) instead.
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
The training took about 8 months (with 7 interruptions) on a single NVIDIA A100 80GB GPU.
The following hyperparameters were used during pre-training:
- learning_rate: 2e-4
- per_device_train_batch_size: 6
- gradient_accumulation_steps: 98
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-06
- weight_decay: 0.01
- lr_scheduler_type: linear
- max_grad_norm: 1.0
- max_steps: 500,000 (but terminated at 186,000 steps ~= 2.0 epochs)
- warmup_steps: 10,000
The eval loss was 1.309 while the eval accuracy was 0.6890. The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
|
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.2-steps-800 | trl-lib | 2023-12-27T12:07:04Z | 3 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:06:46Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-kto-beta-0.2-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.9-steps-800 | trl-lib | 2023-12-27T12:06:31Z | 2 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:06:17Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-ipo-beta-0.9-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
SamSJackson/Reinforce-CartPole-v1 | SamSJackson | 2023-12-27T12:06:28Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T12:06:18Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
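As an illustrative sketch, a REINFORCE-style policy can be rolled out in CartPole-v1 like this (the network layout follows the course's reference implementation and is an assumption, not necessarily an exact match for this checkpoint):
```python
# Illustrative sketch of a REINFORCE-style policy acting greedily in CartPole-v1.
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=1)

env = gym.make("CartPole-v1")
policy = Policy()  # load trained weights here if available, e.g. policy.load_state_dict(torch.load("model.pt"))

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    probs = policy(torch.from_numpy(state).float().unsqueeze(0))
    action = torch.argmax(probs, dim=1).item()
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    total_reward += reward
print(f"Episode return: {total_reward}")
```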
|
trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.8-steps-800 | trl-lib | 2023-12-27T12:06:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:06:04Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-ipo-beta-0.8-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.4-steps-800 | trl-lib | 2023-12-27T12:05:18Z | 1 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:05:03Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-ipo-beta-0.4-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.3-steps-800 | trl-lib | 2023-12-27T12:05:03Z | 3 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:04:42Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-ipo-beta-0.3-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.2-steps-800 | trl-lib | 2023-12-27T12:04:32Z | 1 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2023-12-27T12:04:10Z | ---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: OpenHermes-2-Mistral-7B-ipo-beta-0.2-steps-800
results: []
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
livingbox/vintage-model | livingbox | 2023-12-27T12:02:48Z | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-27T11:59:16Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Vintage-model Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
kghanlon/distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1-RILE-v1_fully_frozen | kghanlon | 2023-12-27T12:01:05Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:kghanlon/distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1",
"base_model:finetune:kghanlon/distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T11:39:22Z | ---
base_model: kghanlon/distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1-RILE-v1_fully_frozen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1-RILE-v1_fully_frozen
This model is a fine-tuned version of [kghanlon/distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1](https://huggingface.co/kghanlon/distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7869
- Accuracy: 0.6445
- Recall: 0.6445
- F1: 0.6397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.8369 | 1.0 | 15490 | 0.8333 | 0.6152 | 0.6152 | 0.6068 |
| 0.8373 | 2.0 | 30980 | 0.8094 | 0.6324 | 0.6324 | 0.6311 |
| 0.822 | 3.0 | 46470 | 0.8024 | 0.6361 | 0.6361 | 0.6279 |
| 0.7985 | 4.0 | 61960 | 0.7927 | 0.6425 | 0.6425 | 0.6345 |
| 0.7929 | 5.0 | 77450 | 0.7869 | 0.6445 | 0.6445 | 0.6397 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
ytu-ce-cosmos/turkish-base-bert-uncased | ytu-ce-cosmos | 2023-12-27T11:56:00Z | 106 | 14 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"pretraining",
"Turkish",
"turkish",
"fill-mask",
"tr",
"arxiv:2307.14134",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-29T21:59:38Z | ---
widget:
- text: "gelirken bir litre [MASK] aldım."
example_title: "Örnek 1"
pipeline_tag: fill-mask
tags:
- Turkish
- turkish
language:
- tr
---
# turkish-base-bert-uncased
This is a Turkish Base uncased BERT model. Since the model is uncased, it does not distinguish between turkish and Turkish.
#### ⚠ Uncased use requires manual lowercase conversion
**Don't** use the `do_lower_case = True` flag with the tokenizer. Instead, convert your text to lower case as follows:
```python
text.replace("I", "ı").lower()
```
This is due to a [known issue](https://github.com/huggingface/transformers/issues/6680) with the tokenizer.
Be aware that this model may exhibit biased predictions as it was trained primarily on crawled data, which inherently can contain various biases.
Other relevant information can be found in the [paper](https://arxiv.org/abs/2307.14134).
## Example Usage
```python
from transformers import AutoTokenizer, BertForMaskedLM
from transformers import pipeline
model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-base-bert-uncased")
# or
# model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-base-bert-uncased", from_tf = True)
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-base-bert-uncased")
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
unmasker("gelirken bir litre [MASK] aldım.")
[{'score': 0.6248273253440857,
'token': 2417,
'token_str': 'su',
'sequence': 'gelirken bir litre su aldım.'},
{'score': 0.10369712114334106,
'token': 2168,
'token_str': 'daha',
'sequence': 'gelirken bir litre daha aldım.'},
{'score': 0.06832519918680191,
'token': 11818,
'token_str': 'benzin',
'sequence': 'gelirken bir litre benzin aldım.'},
{'score': 0.027739914134144783,
'token': 11973,
'token_str': 'bira',
'sequence': 'gelirken bir litre bira aldım.'},
{'score': 0.02571810781955719,
'token': 7279,
'token_str': 'alkol',
'sequence': 'gelirken bir litre alkol aldım.'}]
```
# Acknowledgments
- Research supported with Cloud TPUs from [Google's TensorFlow Research Cloud](https://sites.research.google/trc/about/) (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
# Citations
```bibtex
@article{kesgin2023developing,
title={Developing and Evaluating Tiny to Medium-Sized Turkish BERT Models},
author={Kesgin, Himmet Toprak and Yuce, Muzaffer Kaan and Amasyali, Mehmet Fatih},
journal={arXiv preprint arXiv:2307.14134},
year={2023}
}
```
# License
MIT
|
ytu-ce-cosmos/turkish-medium-bert-uncased | ytu-ce-cosmos | 2023-12-27T11:55:38Z | 16 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"pretraining",
"Turkish",
"turkish",
"fill-mask",
"tr",
"arxiv:2307.14134",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-29T17:11:57Z | ---
widget:
- text: "gelirken bir litre [MASK] aldım."
example_title: "Örnek 1"
pipeline_tag: fill-mask
tags:
- Turkish
- turkish
language:
- tr
---
# turkish-medium-bert-uncased
This is a Turkish Medium uncased BERT model, developed to fill the gap in small-sized BERT models for Turkish. Since the model is uncased, it does not distinguish between turkish and Turkish.
#### ⚠ Uncased use requires manual lowercase conversion
**Don't** use the `do_lower_case = True` flag with the tokenizer. Instead, convert your text to lower case as follows:
```python
text.replace("I", "ı").lower()
```
This is due to a [known issue](https://github.com/huggingface/transformers/issues/6680) with the tokenizer.
Be aware that this model may exhibit biased predictions as it was trained primarily on crawled data, which inherently can contain various biases.
Other relevant information can be found in the [paper](https://arxiv.org/abs/2307.14134).
## Example Usage
```python
from transformers import AutoTokenizer, BertForMaskedLM
from transformers import pipeline
model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-medium-bert-uncased")
# or
# model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-medium-bert-uncased", from_tf = True)
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-medium-bert-uncased")
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
unmasker("gelirken bir litre [MASK] aldım.")
[{'score': 0.6158884763717651,
'token': 11818,
'token_str': 'benzin',
'sequence': 'gelirken bir litre benzin aldım.'},
{'score': 0.1580735594034195,
'token': 2417,
'token_str': 'su',
'sequence': 'gelirken bir litre su aldım.'},
{'score': 0.07746931910514832,
'token': 29480,
'token_str': 'mazot',
'sequence': 'gelirken bir litre mazot aldım.'},
{'score': 0.0339476652443409,
'token': 4521,
'token_str': 'süt',
'sequence': 'gelirken bir litre süt aldım.'},
{'score': 0.021608062088489532,
'token': 7279,
'token_str': 'alkol',
'sequence': 'gelirken bir litre alkol aldım.'}]
```
# Acknowledgments
- Research supported with Cloud TPUs from [Google's TensorFlow Research Cloud](https://sites.research.google/trc/about/) (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
# Citations
```bibtex
@article{kesgin2023developing,
title={Developing and Evaluating Tiny to Medium-Sized Turkish BERT Models},
author={Kesgin, Himmet Toprak and Yuce, Muzaffer Kaan and Amasyali, Mehmet Fatih},
journal={arXiv preprint arXiv:2307.14134},
year={2023}
}
```
# License
MIT
|
ytu-ce-cosmos/turkish-small-bert-uncased | ytu-ce-cosmos | 2023-12-27T11:54:55Z | 22 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"pretraining",
"Turkish",
"turkish",
"fill-mask",
"tr",
"arxiv:2307.14134",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-29T17:02:37Z | ---
widget:
- text: "gelirken bir litre [MASK] aldım."
example_title: "Örnek 1"
pipeline_tag: fill-mask
tags:
- Turkish
- turkish
language:
- tr
---
# turkish-small-bert-uncased
This is a Turkish Small uncased BERT model, developed to fill the gap in small-sized BERT models for Turkish. Since the model is uncased, it does not distinguish between turkish and Turkish.
#### ⚠ Uncased use requires manual lowercase conversion
**Don't** use the `do_lower_case = True` flag with the tokenizer. Instead, convert your text to lower case as follows:
```python
text.replace("I", "ı").lower()
```
This is due to a [known issue](https://github.com/huggingface/transformers/issues/6680) with the tokenizer.
Be aware that this model may exhibit biased predictions as it was trained primarily on crawled data, which inherently can contain various biases.
Other relevant information can be found in the [paper](https://arxiv.org/abs/2307.14134).
## Example Usage
```python
from transformers import AutoTokenizer, BertForMaskedLM
from transformers import pipeline
model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-small-bert-uncased")
# or
# model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-small-bert-uncased", from_tf = True)
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-small-bert-uncased")
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
unmasker("gelirken bir litre [MASK] aldım.")
[{'score': 0.3692811131477356,
'token': 2417,
'token_str': 'su',
'sequence': 'gelirken bir litre su aldım.'},
{'score': 0.2551537752151489,
'token': 11818,
'token_str': 'benzin',
'sequence': 'gelirken bir litre benzin aldım.'},
{'score': 0.036265160888433456,
'token': 29480,
'token_str': 'mazot',
'sequence': 'gelirken bir litre mazot aldım.'},
{'score': 0.03350532799959183,
'token': 4521,
'token_str': 'süt',
'sequence': 'gelirken bir litre süt aldım.'},
{'score': 0.02558029256761074,
'token': 2168,
'token_str': 'daha',
'sequence': 'gelirken bir litre daha aldım.'}]
```
# Acknowledgments
- Research supported with Cloud TPUs from [Google's TensorFlow Research Cloud](https://sites.research.google/trc/about/) (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
# Citations
```bibtex
@article{kesgin2023developing,
title={Developing and Evaluating Tiny to Medium-Sized Turkish BERT Models},
author={Kesgin, Himmet Toprak and Yuce, Muzaffer Kaan and Amasyali, Mehmet Fatih},
journal={arXiv preprint arXiv:2307.14134},
year={2023}
}
```
# License
MIT
|
ytu-ce-cosmos/turkish-tiny-bert-uncased | ytu-ce-cosmos | 2023-12-27T11:53:53Z | 31 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"pretraining",
"Turkish",
"turkish",
"fill-mask",
"tr",
"arxiv:2307.14134",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-29T14:02:27Z | ---
widget:
- text: gelirken bir litre [MASK] aldım.
example_title: ürün
pipeline_tag: fill-mask
tags:
- Turkish
- turkish
license: mit
language:
- tr
---
# turkish-tiny-bert-uncased
This is a Turkish Tiny uncased BERT model, developed to fill the gap in small-sized BERT models for Turkish. Since the model is uncased, it does not distinguish between turkish and Turkish.
#### ⚠ Uncased use requires manual lowercase conversion
**Don't** use the `do_lower_case = True` flag with the tokenizer. Instead, convert your text to lower case as follows:
```python
text.replace("I", "ı").lower()
```
This is due to a [known issue](https://github.com/huggingface/transformers/issues/6680) with the tokenizer.
Be aware that this model may exhibit biased predictions as it was trained primarily on crawled data, which inherently can contain various biases.
Other relevant information can be found in the [paper](https://arxiv.org/abs/2307.14134).
## Example Usage
```python
from transformers import AutoTokenizer, BertForMaskedLM
from transformers import pipeline
model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-tiny-bert-uncased")
# or
# model = BertForMaskedLM.from_pretrained("ytu-ce-cosmos/turkish-tiny-bert-uncased", from_tf = True)
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-tiny-bert-uncased")
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
unmasker("gelirken bir litre [MASK] aldım.")
# [{'score': 0.202457457780838,
# 'token': 2417,
# 'token_str': 'su',
# 'sequence': 'gelirken bir litre su aldım.'},
# {'score': 0.09290537238121033,
# 'token': 11818,
# 'token_str': 'benzin',
# 'sequence': 'gelirken bir litre benzin aldım.'},
# {'score': 0.07785643637180328,
# 'token': 2026,
# 'token_str': '##den',
# 'sequence': 'gelirken bir litreden aldım.'},
# {'score': 0.06889808923006058,
# 'token': 2299,
# 'token_str': '##yi',
# 'sequence': 'gelirken bir litreyi aldım.'},
# {'score': 0.03152570128440857,
# 'token': 2647,
# 'token_str': '##ye',
# 'sequence': 'gelirken bir litreye aldım.'}]
```
# Acknowledgments
- Research supported with Cloud TPUs from [Google's TensorFlow Research Cloud](https://sites.research.google/trc/about/) (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
# Citations
```bibtex
@article{kesgin2023developing,
title={Developing and Evaluating Tiny to Medium-Sized Turkish BERT Models},
author={Kesgin, Himmet Toprak and Yuce, Muzaffer Kaan and Amasyali, Mehmet Fatih},
journal={arXiv preprint arXiv:2307.14134},
year={2023}
}
```
# License
MIT |
tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu2024_2 | tunarebus | 2023-12-27T11:49:47Z | 3 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:w11wo/indonesian-roberta-base-posp-tagger",
"base_model:finetune:w11wo/indonesian-roberta-base-posp-tagger",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-12-27T11:27:59Z | ---
license: mit
base_model: w11wo/indonesian-roberta-base-posp-tagger
tags:
- generated_from_keras_callback
model-index:
- name: tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu2024_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu2024_2
This model is a fine-tuned version of [w11wo/indonesian-roberta-base-posp-tagger](https://huggingface.co/w11wo/indonesian-roberta-base-posp-tagger) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.7581
- Validation Loss: 6.4974
- Epoch: 9
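A minimal inference sketch, assuming the fine-tuned checkpoint is used for standard masked-language-model prediction; the repository ships TensorFlow weights (hence the TF model class) and the example sentence is illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

model_id = "tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu2024_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)  # TensorFlow checkpoint

unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Illustrative Indonesian sentence; the mask token is taken from the tokenizer to avoid hard-coding it.
print(unmasker(f"pemilu 2024 akan {tokenizer.mask_token} pada bulan februari."))
```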
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -969, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.7200 | 11.3464 | 0 |
| 10.7101 | 10.0138 | 1 |
| 9.4860 | 9.0052 | 2 |
| 8.6505 | 8.1202 | 3 |
| 8.0255 | 7.6902 | 4 |
| 7.6350 | 7.2934 | 5 |
| 7.3084 | 7.0449 | 6 |
| 7.0715 | 6.9226 | 7 |
| 6.9394 | 6.6571 | 8 |
| 6.7581 | 6.4974 | 9 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
mtileria00/sentence-bert-finetuned-doc-classification | mtileria00 | 2023-12-27T11:47:07Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-12-27T11:40:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# mtileria00/sentence-bert-finetuned-doc-classification
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mtileria00/sentence-bert-finetuned-doc-classification')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mtileria00/sentence-bert-finetuned-doc-classification)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 570 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 228,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DanielDsouza/ppo-LunarLander-v2_8 | DanielDsouza | 2023-12-27T11:46:40Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T11:46:33Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -91.60 +/- 37.06
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'DanielDsouza/ppo-LunarLander-v2_8'
'batch_size': 512
'minibatch_size': 128}
```
|
adarsh2350/distilbert_uncased_imdb_pytorch | adarsh2350 | 2023-12-27T11:46:17Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T10:54:07Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_uncased_imdb_pytorch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_uncased_imdb_pytorch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2278
- Accuracy: 0.9323
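A minimal inference sketch, assuming the checkpoint is used through the standard `text-classification` pipeline; the review text is illustrative, and the label names depend on the `id2label` mapping saved with the model (they may appear as the generic `LABEL_0`/`LABEL_1`).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="adarsh2350/distilbert_uncased_imdb_pytorch")
# Labels may be the generic LABEL_0 / LABEL_1 unless id2label was customised during fine-tuning.
print(classifier("A beautifully shot film with a script that falls completely flat."))
```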
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2261 | 1.0 | 1563 | 0.2104 | 0.9179 |
| 0.1476 | 2.0 | 3126 | 0.2278 | 0.9323 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
kghanlon/distilbert-base-uncased-RILE-v1_un_frozen | kghanlon | 2023-12-27T11:36:31Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T10:24:00Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distilbert-base-uncased-RILE-v1_un_frozen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-RILE-v1_un_frozen
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4132
- Accuracy: 0.7341
- Recall: 0.7341
- F1: 0.7334
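A minimal inference sketch, assuming the model is applied as a standard sequence classifier over manifesto-style text; the example sentence is illustrative and the RILE label names depend on the `id2label` mapping saved with the checkpoint.

```python
from transformers import pipeline

rile_classifier = pipeline(
    "text-classification",
    model="kghanlon/distilbert-base-uncased-RILE-v1_un_frozen",
)
# Illustrative manifesto-style sentence; output label names follow the saved id2label mapping.
print(rile_classifier("We will cut taxes for businesses and reduce state intervention in the economy."))
```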
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.7018 | 1.0 | 15490 | 0.6802 | 0.7181 | 0.7181 | 0.7150 |
| 0.5938 | 2.0 | 30980 | 0.6689 | 0.7291 | 0.7291 | 0.7295 |
| 0.4566 | 3.0 | 46470 | 0.7686 | 0.7319 | 0.7319 | 0.7324 |
| 0.3122 | 4.0 | 61960 | 1.1036 | 0.7330 | 0.7330 | 0.7325 |
| 0.2418 | 5.0 | 77450 | 1.4132 | 0.7341 | 0.7341 | 0.7334 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
adityamavle/dqn-SpaceInvaders-v3 | adityamavle | 2023-12-27T11:19:28Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T11:19:11Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 92.00 +/- 32.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adityamavle -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adityamavle -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga adityamavle
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 1000000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 100000),
('policy', 'CnnPolicy'),
('target_update_interval', 100),
('train_freq', 4),
('normalize', False)])
```
|
tandevstag/vi-fin-news | tandevstag | 2023-12-27T11:07:13Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:FPTAI/vibert-base-cased",
"base_model:finetune:FPTAI/vibert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T09:35:28Z | ---
base_model: FPTAI/vibert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vi-fin-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-fin-news
This model is a fine-tuned version of [FPTAI/vibert-base-cased](https://huggingface.co/FPTAI/vibert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4509
- Accuracy: 0.9136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1176 | 1.0 | 1150 | 0.3566 | 0.9181 |
| 0.0582 | 2.0 | 2300 | 0.4509 | 0.9136 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
litagin/Style-Bert-VITS2-1.0-base | litagin | 2023-12-27T11:05:43Z | 0 | 8 | null | [
"text-to-speech",
"ja",
"zh",
"en",
"region:us"
] | text-to-speech | 2023-12-26T17:05:07Z | ---
language:
- ja
- zh
- en
pipeline_tag: text-to-speech
---
# Style-Bert-VITS2 base model
The base model of [Style-Bert-VITS2](https://github.com/litagin02/Style-Bert-VITS2).
- These models are essentially the safetensors versions of [Bert-VITS2](https://github.com/fishaudio/Bert-VITS2) 2.1 base models (emo-related key in G_0 deleted).
- Hence all credits go to the original authors of Bert-VITS2 2.1 base models and [Fish Audio](https://github.com/fishaudio). |
EleanorLin/bert-finetuned-squad | EleanorLin | 2023-12-27T10:59:10Z | 22 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-12-26T18:31:41Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
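A minimal inference sketch, assuming the checkpoint is used through the standard extractive `question-answering` pipeline; the question/context pair is illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="EleanorLin/bert-finetuned-squad")
result = qa(
    question="Which base model was fine-tuned?",
    context="This checkpoint was fine-tuned from bert-base-cased on a SQuAD-style dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```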
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
renardkorzeniowski/wav2vec2-base-finetuned-ks | renardkorzeniowski | 2023-12-27T10:55:08Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-12-27T10:47:31Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- superb
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.3769
- eval_accuracy: 0.6209
- eval_runtime: 91.7806
- eval_samples_per_second: 74.068
- eval_steps_per_second: 2.321
- epoch: 0.06
- step: 25
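A minimal inference sketch, assuming the checkpoint is used as a keyword-spotting audio classifier in the usual way; `command.wav` is a placeholder path, and wav2vec2 expects 16 kHz mono audio.

```python
from transformers import pipeline

keyword_spotter = pipeline("audio-classification", model="renardkorzeniowski/wav2vec2-base-finetuned-ks")
# "command.wav" is a placeholder; decoding a file path requires ffmpeg to be installed.
print(keyword_spotter("command.wav", top_k=3))
```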
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.14.1
|
ntc-ai/SDXL-LoRA-slider.inflateable | ntc-ai | 2023-12-27T10:50:37Z | 12 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-27T10:50:33Z |
---
language:
- en
thumbnail: "images/evaluate/inflateable.../inflateable_17_3.0.png"
widget:
- text: inflateable
output:
url: images/inflateable_17_3.0.png
- text: inflateable
output:
url: images/inflateable_19_3.0.png
- text: inflateable
output:
url: images/inflateable_20_3.0.png
- text: inflateable
output:
url: images/inflateable_21_3.0.png
- text: inflateable
output:
url: images/inflateable_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "inflateable"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - inflateable (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/inflateable_17_-3.0.png" width=256 height=256 /> | <img src="images/inflateable_17_0.0.png" width=256 height=256 /> | <img src="images/inflateable_17_3.0.png" width=256 height=256 /> |
| <img src="images/inflateable_19_-3.0.png" width=256 height=256 /> | <img src="images/inflateable_19_0.0.png" width=256 height=256 /> | <img src="images/inflateable_19_3.0.png" width=256 height=256 /> |
| <img src="images/inflateable_20_-3.0.png" width=256 height=256 /> | <img src="images/inflateable_20_0.0.png" width=256 height=256 /> | <img src="images/inflateable_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
inflateable
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.inflateable', weight_name='inflateable.safetensors', adapter_name="inflateable")
# Activate the LoRA
pipe.set_adapters(["inflateable"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, inflateable"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of 660+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
DataVare/DataVare-PST-To-PDF-Converter | DataVare | 2023-12-27T10:45:58Z | 0 | 0 | null | [
"region:us"
] | null | 2023-12-27T10:43:36Z | Use the expert DataVare Outlook PST to PDF Converter Tool to export Outlook OST and PST files to Adobe PDF file format with a few clicks of the mouse. It converts PST file emails with attachments to PDF in bulk without creating any errors.
All editions of MS Outlook, such as 2021, 2019, 2016, 2013, 2010, 2007, etc, are performed with it. This application works with all Windows and Adobe PDF versions. This software comes with complete features box as - an easy and quick GUI, fully tested by all online scanners and antiviruses, users can store the resultant file at their chosen location, a free demo is also offered, and 24x7 technical support is availiable to all the users for fix theirs problem and queries.
Read More:- https://www.datavare.com/software/pst-to-pdf-converter-expert.html |
judithrosell/BlueBERT_CRAFT_NER_new | judithrosell | 2023-12-27T10:38:24Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12",
"base_model:finetune:bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-27T10:22:28Z | ---
license: cc0-1.0
base_model: bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BlueBERT_CRAFT_NER_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BlueBERT_CRAFT_NER_new
This model is a fine-tuned version of [bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1392
- Precision: 0.8229
- Recall: 0.7998
- F1: 0.8112
- Accuracy: 0.9659
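A minimal inference sketch, assuming standard `token-classification` usage; the entity tag set comes from the CRAFT annotations and is not listed here, so the exact labels in the output are not guaranteed, and the example sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="judithrosell/BlueBERT_CRAFT_NER_new",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Mutations in the BRCA1 gene increase the risk of breast cancer."))
```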
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2722 | 1.0 | 695 | 0.1429 | 0.7839 | 0.7856 | 0.7847 | 0.9603 |
| 0.0811 | 2.0 | 1390 | 0.1351 | 0.8229 | 0.7933 | 0.8078 | 0.9654 |
| 0.0421 | 3.0 | 2085 | 0.1392 | 0.8229 | 0.7998 | 0.8112 | 0.9659 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
stagvn/vi-fin-news | stagvn | 2023-12-27T10:27:38Z | 43 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"vi",
"base_model:FPTAI/vibert-base-cased",
"base_model:finetune:FPTAI/vibert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-22T03:08:39Z | ---
base_model: FPTAI/vibert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vi-fin-news
results: []
license: apache-2.0
language:
- vi
library_name: transformers
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-fin-news
This model is a fine-tuned version of [FPTAI/vibert-base-cased](https://huggingface.co/FPTAI/vibert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4509
- Accuracy: 0.9136
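A minimal inference sketch, assuming standard `text-classification` usage; the Vietnamese headline is illustrative and the label names depend on the `id2label` mapping saved with the checkpoint.

```python
from transformers import pipeline

news_classifier = pipeline("text-classification", model="stagvn/vi-fin-news")
# Illustrative Vietnamese financial-news headline; labels follow the saved id2label mapping.
print(news_classifier("Ngân hàng Nhà nước giảm lãi suất điều hành lần thứ tư trong năm."))
```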
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1176 | 1.0 | 1150 | 0.3566 | 0.9181 |
| 0.0582 | 2.0 | 2300 | 0.4509 | 0.9136 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3 |
lakshay/llama2-test | lakshay | 2023-12-27T10:22:36Z | 2 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-11-22T06:20:13Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
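A minimal sketch of loading this PEFT adapter on top of its stated base model (`meta-llama/Llama-2-7b-chat-hf`), assuming the adapter was trained for causal language modeling and that you have access to the gated base weights:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"   # gated; requires accepting the license on the Hub
adapter_id = "lakshay/llama2-test"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the PEFT adapter weights from this repository on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] Explain what a PEFT adapter is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```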
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
kghanlon/distilbert-base-uncased-RILE-v1_frozen_4 | kghanlon | 2023-12-27T10:22:29Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T09:48:58Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distilbert-base-uncased-RILE-v1_frozen_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-RILE-v1_frozen_4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8650
- Accuracy: 0.7404
- Recall: 0.7404
- F1: 0.7399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.6951 | 1.0 | 15490 | 0.6812 | 0.7140 | 0.7140 | 0.7124 |
| 0.6449 | 2.0 | 30980 | 0.6663 | 0.7282 | 0.7282 | 0.7292 |
| 0.5592 | 3.0 | 46470 | 0.6709 | 0.7376 | 0.7376 | 0.7366 |
| 0.4423 | 4.0 | 61960 | 0.7656 | 0.7389 | 0.7389 | 0.7376 |
| 0.3611 | 5.0 | 77450 | 0.8650 | 0.7404 | 0.7404 | 0.7399 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
freyagracia/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu_postagger | freyagracia | 2023-12-27T10:10:36Z | 1 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:w11wo/indonesian-roberta-base-posp-tagger",
"base_model:finetune:w11wo/indonesian-roberta-base-posp-tagger",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-12-27T10:00:23Z | ---
license: mit
base_model: w11wo/indonesian-roberta-base-posp-tagger
tags:
- generated_from_keras_callback
model-index:
- name: freyagracia/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu_postagger
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# freyagracia/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu_postagger
This model is a fine-tuned version of [w11wo/indonesian-roberta-base-posp-tagger](https://huggingface.co/w11wo/indonesian-roberta-base-posp-tagger) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.1066
- Validation Loss: 7.8001
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -969, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.8409 | 11.4612 | 0 |
| 10.7746 | 10.0532 | 1 |
| 9.5651 | 9.0262 | 2 |
| 8.7226 | 8.2780 | 3 |
| 8.1066 | 7.8001 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
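## Example usage (sketch)
The repository is tagged `fill-mask` and ships TensorFlow weights, so a hedged usage sketch (assuming the masked-language-modeling head saved during fine-tuning is the one you want to query) looks like this; the example sentence is Indonesian ("The election will ___ in 2024").
```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

model_id = "freyagracia/indonesian-roberta-base-posp-tagger-finetuned-tweet_pemilu_postagger"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)  # the repo ships TensorFlow weights

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Use the tokenizer's own mask token rather than hard-coding <mask>.
masked = f"Pemilu akan {tokenizer.mask_token} pada tahun 2024."
print(fill(masked))
```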
|
li-jay-cs/gpt2-medium-supervised-summarize-checkpoint | li-jay-cs | 2023-12-27T09:53:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-24T07:29:38Z | ---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: gpt2-medium-supervised-summarize-checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-supervised-summarize-checkpoint
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7422
- Rouge1: 0.6035
- Rouge2: 0.2047
- Rougel: 0.4141
- Rougelsum: 0.5319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.859 | 0.21 | 500 | 1.8105 | 0.5966 | 0.1961 | 0.4025 | 0.5237 |
| 1.852 | 0.43 | 1000 | 1.7900 | 0.5994 | 0.1981 | 0.4061 | 0.5271 |
| 1.8189 | 0.64 | 1500 | 1.7764 | 0.6000 | 0.2005 | 0.4082 | 0.5276 |
| 1.8191 | 0.86 | 2000 | 1.7695 | 0.6013 | 0.2009 | 0.4096 | 0.5290 |
| 1.7969 | 1.07 | 2500 | 1.7617 | 0.6038 | 0.2020 | 0.4108 | 0.5311 |
| 1.7967 | 1.28 | 3000 | 1.7578 | 0.6024 | 0.2024 | 0.4114 | 0.5304 |
| 1.7813 | 1.5 | 3500 | 1.7520 | 0.6038 | 0.2039 | 0.4128 | 0.5320 |
| 1.7704 | 1.71 | 4000 | 1.7480 | 0.6033 | 0.2045 | 0.4132 | 0.5310 |
| 1.7852 | 1.93 | 4500 | 1.7422 | 0.6035 | 0.2047 | 0.4141 | 0.5319 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
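## Example usage (sketch)
The card does not document a prompt format. Checkpoints of this kind (GPT-2 fine-tuned for supervised summarization and evaluated with ROUGE) are commonly prompted with the source text followed by a `TL;DR:` marker, so the snippet below is a sketch under that assumption, not a documented interface.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "li-jay-cs/gpt2-medium-supervised-summarize-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

document = "Long post or article text goes here ..."
prompt = document.strip() + "\n\nTL;DR:"  # assumed prompt format; not confirmed by the card

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=900)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )

# Decode only the newly generated tokens after the prompt.
summary = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(summary.strip())
```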
|
nitinbhayana/Llama-2-7b-chat-hf-adapter-client-361 | nitinbhayana | 2023-12-27T09:49:39Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-27T09:49:29Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
MarkTsao/HW-01 | MarkTsao | 2023-12-27T09:48:52Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-23T01:41:56Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: HW-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HW-01
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8505
- Matthews Correlation: 0.5410
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5194 | 1.0 | 535 | 0.4550 | 0.4394 |
| 0.3454 | 2.0 | 1070 | 0.4657 | 0.5110 |
| 0.2382 | 3.0 | 1605 | 0.6482 | 0.5196 |
| 0.1609 | 4.0 | 2140 | 0.7522 | 0.5390 |
| 0.1207 | 5.0 | 2675 | 0.8505 | 0.5410 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
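## Computing the reported metric (sketch)
The card reports Matthews correlation as its evaluation metric. For reference, a minimal sketch of computing the same metric with the `evaluate` library; the predictions and references below are hypothetical placeholders, not results from this model.
```python
import evaluate

matthews = evaluate.load("matthews_correlation")

# Hypothetical values just to illustrate the call; real values would come from
# running the fine-tuned model on the validation split.
predictions = [1, 0, 1, 1, 0, 1]
references = [1, 0, 0, 1, 0, 1]

print(matthews.compute(predictions=predictions, references=references))
```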
|
kghanlon/distilbert-base-uncased-RILE-v1_fully_frozen | kghanlon | 2023-12-27T09:46:16Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T09:24:40Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distilbert-base-uncased-RILE-v1_fully_frozen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-RILE-v1_fully_frozen
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8010
- Accuracy: 0.6370
- Recall: 0.6370
- F1: 0.6323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.8514 | 1.0 | 15490 | 0.8465 | 0.6078 | 0.6078 | 0.5986 |
| 0.8486 | 2.0 | 30980 | 0.8245 | 0.6248 | 0.6248 | 0.6247 |
| 0.8332 | 3.0 | 46470 | 0.8174 | 0.6260 | 0.6260 | 0.6161 |
| 0.8157 | 4.0 | 61960 | 0.8080 | 0.6317 | 0.6317 | 0.6222 |
| 0.8163 | 5.0 | 77450 | 0.8010 | 0.6370 | 0.6370 | 0.6323 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
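## What "fully frozen" might look like (sketch)
The card does not spell out the freezing scheme. Assuming "fully frozen" means the entire DistilBERT encoder is frozen and only the classification head is trained (a common reading of the name), the setup can be sketched as below; the 3-label configuration is also an assumption, matching a RILE-style left/neutral/right scheme.
```python
from transformers import AutoModelForSequenceClassification

# num_labels=3 is an assumption; the card does not list the label set.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)

# Freeze the whole DistilBERT encoder; only the pre-classifier and classifier layers keep gradients.
for param in model.distilbert.parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # e.g. pre_classifier.* and classifier.* only
```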
|
MaziyarPanahi/vision_pro_dreambooth_project | MaziyarPanahi | 2023-12-27T09:39:12Z | 33 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"vision-pro",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-12-25T14:59:06Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- vision-pro
widget:
- text: photo of a sks Vision Pro
output:
url: images/Apple-WWCD23-Vision-Pro-with-battery-230605.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
pipeline_tag: text-to-image
---
# vision_pro_dreambooth_project
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/MaziyarPanahi/vision_pro_dreambooth_project/tree/main) them in the Files & versions tab. |
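The LoRA can also be loaded directly with 🤗 Diffusers. A minimal sketch, assuming the repository follows the standard Diffusers LoRA layout and the weights were trained against the stated base model (`stabilityai/stable-diffusion-xl-base-1.0`); the trigger phrase follows the widget example above.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("MaziyarPanahi/vision_pro_dreambooth_project")

image = pipe(
    "photo of a sks Vision Pro on a wooden desk, studio lighting",
    num_inference_steps=30,
).images[0]
image.save("vision_pro.png")
```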
Wolverine01/dqn-SpaceInvadersNoFrameskip-v4 | Wolverine01 | 2023-12-27T09:31:18Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T09:30:40Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 680.50 +/- 327.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Wolverine01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Wolverine01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Wolverine01
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mesa44/SpaceInvadersNoFrameskip-v4 | mesa44 | 2023-12-27T09:16:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T09:15:45Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 698.00 +/- 241.27
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mesa44 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mesa44 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mesa44
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
tiennguyenbnbk/teacher-status-van-tiny-256-0 | tiennguyenbnbk | 2023-12-27T09:05:53Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"van",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:Visual-Attention-Network/van-tiny",
"base_model:finetune:Visual-Attention-Network/van-tiny",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-27T08:01:39Z | ---
license: apache-2.0
base_model: Visual-Attention-Network/van-tiny
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- recall
- precision
model-index:
- name: teacher-status-van-tiny-256-0
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9777777777777777
- name: Recall
type: recall
value: 0.9893162393162394
- name: Precision
type: precision
value: 0.9788583509513742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher-status-van-tiny-256-0
This model is a fine-tuned version of [Visual-Attention-Network/van-tiny](https://huggingface.co/Visual-Attention-Network/van-tiny) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0672
- Accuracy: 0.9778
- F1 Score: 0.9841
- Recall: 0.9893
- Precision: 0.9789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 0.6788 | 0.99 | 47 | 0.6437 | 0.6933 | 0.8189 | 1.0 | 0.6933 |
| 0.463 | 2.0 | 95 | 0.3406 | 0.8756 | 0.9162 | 0.9808 | 0.8596 |
| 0.3596 | 2.99 | 142 | 0.2072 | 0.9304 | 0.9504 | 0.9615 | 0.9395 |
| 0.3505 | 4.0 | 190 | 0.1564 | 0.9526 | 0.9661 | 0.9744 | 0.9580 |
| 0.2962 | 4.99 | 237 | 0.1262 | 0.9556 | 0.9681 | 0.9722 | 0.9640 |
| 0.2762 | 6.0 | 285 | 0.1038 | 0.9644 | 0.9745 | 0.9808 | 0.9684 |
| 0.2604 | 6.99 | 332 | 0.0932 | 0.9719 | 0.9798 | 0.9829 | 0.9766 |
| 0.2427 | 8.0 | 380 | 0.0928 | 0.9719 | 0.9797 | 0.9786 | 0.9807 |
| 0.2465 | 8.99 | 427 | 0.0898 | 0.9719 | 0.9797 | 0.9786 | 0.9807 |
| 0.2519 | 10.0 | 475 | 0.0913 | 0.9689 | 0.9775 | 0.9765 | 0.9786 |
| 0.2258 | 10.99 | 522 | 0.0847 | 0.9733 | 0.9809 | 0.9872 | 0.9747 |
| 0.2184 | 12.0 | 570 | 0.0812 | 0.9793 | 0.9851 | 0.9893 | 0.9809 |
| 0.2208 | 12.99 | 617 | 0.0693 | 0.9807 | 0.9861 | 0.9872 | 0.9851 |
| 0.2201 | 14.0 | 665 | 0.0628 | 0.9763 | 0.9829 | 0.9850 | 0.9809 |
| 0.2251 | 14.99 | 712 | 0.0811 | 0.9733 | 0.9810 | 0.9915 | 0.9707 |
| 0.2135 | 16.0 | 760 | 0.0718 | 0.9763 | 0.9829 | 0.9850 | 0.9809 |
| 0.1851 | 16.99 | 807 | 0.0791 | 0.9763 | 0.9830 | 0.9872 | 0.9788 |
| 0.2152 | 18.0 | 855 | 0.0737 | 0.9748 | 0.9818 | 0.9808 | 0.9829 |
| 0.1871 | 18.99 | 902 | 0.0814 | 0.9763 | 0.9830 | 0.9872 | 0.9788 |
| 0.1714 | 20.0 | 950 | 0.0692 | 0.9763 | 0.9830 | 0.9893 | 0.9768 |
| 0.188 | 20.99 | 997 | 0.0641 | 0.9778 | 0.9840 | 0.9850 | 0.9829 |
| 0.191 | 22.0 | 1045 | 0.0644 | 0.9793 | 0.9851 | 0.9872 | 0.9830 |
| 0.2025 | 22.99 | 1092 | 0.0675 | 0.9793 | 0.9850 | 0.9829 | 0.9871 |
| 0.1753 | 24.0 | 1140 | 0.0655 | 0.9822 | 0.9872 | 0.9893 | 0.9851 |
| 0.1857 | 24.99 | 1187 | 0.0731 | 0.9793 | 0.9851 | 0.9915 | 0.9789 |
| 0.2007 | 26.0 | 1235 | 0.0677 | 0.9793 | 0.9851 | 0.9915 | 0.9789 |
| 0.2086 | 26.99 | 1282 | 0.0640 | 0.9793 | 0.9851 | 0.9893 | 0.9809 |
| 0.1666 | 28.0 | 1330 | 0.0712 | 0.9778 | 0.9841 | 0.9893 | 0.9789 |
| 0.157 | 28.99 | 1377 | 0.0661 | 0.9807 | 0.9862 | 0.9893 | 0.9830 |
| 0.1758 | 29.68 | 1410 | 0.0672 | 0.9778 | 0.9841 | 0.9893 | 0.9789 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
xaviviro/llama-2-7b-chat-catala | xaviviro | 2023-12-27T09:02:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ca",
"en",
"dataset:xaviviro/oasst1_ca_threads",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-26T22:51:41Z | ---
base_model: NousResearch/Llama-2-7b-chat-hf
datasets:
- xaviviro/oasst1_ca_threads
language:
- ca
- en
model_type: llama
prompt_template: >-
<s>[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. <</SYS>>
{instruction} [/INST]
license: apache-2.0
---
# llama-2-7b-chat-catala
## Prompt template
```
<s>[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. <</SYS>> {instruction} [/INST]
``` |
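A minimal generation sketch that applies the prompt template above (the system prompt translates to "You are a generic chatbot that always answers in Catalan"). It assumes the saved tokenizer adds the leading `<s>` token itself, so the template's explicit `<s>` is omitted from the string.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "xaviviro/llama-2-7b-chat-catala"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# "Explain to me what a language model is."
instruction = "Explica'm què és un model de llenguatge."
prompt = (
    "[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. <</SYS>> "
    f"{instruction} [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```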
0x7o/fialka-7B-v2.1 | 0x7o | 2023-12-27T08:57:25Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ru",
"dataset:0x7194633/fialka-v1-zephyr",
"dataset:0x7194633/oasst2-best-ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-26T08:08:34Z | ---
license: apache-2.0
datasets:
- 0x7194633/fialka-v1-zephyr
- 0x7194633/oasst2-best-ru
language:
- ru
pipeline_tag: text-generation
---
# Fialka v2.1 7B

## Description
Fialka language models are trained to follow instructions and hold conversations in Russian. Version 2.1 is trained on a dataset of [instructions](https://huggingface.co/datasets/0x7194633/fialka-v1-zephyr) and [dialogs](https://huggingface.co/datasets/0x7194633/oasst2-best-ru).
**Check out our [new V3 model](https://huggingface.co/0x7194633/fialka-7B-v3), which generates more accurate and higher-quality Russian text.**
## Usage
The model uses the same prompt format as Zephyr.
```
<|user|>
Напиши код на python, который удалит файл `1.txt`.</s>
<|assistant|>
Для того чтобы удалить файл в Python необходимо использовать функцию os.remove(). Она принимает имя файла в качестве аргумента и выполняет операции удаления <...>
```
Check out the [space](https://huggingface.co/spaces/0x7194633/fialka) to try the model in a web UI without downloading it.
|
Raihan004/Hierarchical_Agent_Action | Raihan004 | 2023-12-27T08:55:41Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-02T08:24:08Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: Hierarchical_Agent_Action
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: agent_action_class
type: image_folder
config: hierarchical-action-agent
split: train
args: hierarchical-action-agent
metrics:
- name: Accuracy
type: accuracy
value: 0.8402877697841726
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hierarchical_Agent_Action
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the agent_action_class dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5942
- Accuracy: 0.8403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4407 | 0.81 | 100 | 2.2716 | 0.6058 |
| 1.7756 | 1.61 | 200 | 1.6162 | 0.7065 |
| 1.3948 | 2.42 | 300 | 1.2200 | 0.7698 |
| 1.131 | 3.23 | 400 | 1.0012 | 0.7856 |
| 0.9239 | 4.03 | 500 | 0.9055 | 0.7827 |
| 0.8699 | 4.84 | 600 | 0.8103 | 0.7827 |
| 0.6707 | 5.65 | 700 | 0.7610 | 0.7842 |
| 0.6206 | 6.45 | 800 | 0.7312 | 0.7885 |
| 0.5795 | 7.26 | 900 | 0.6989 | 0.8101 |
| 0.4914 | 8.06 | 1000 | 0.7066 | 0.7813 |
| 0.5087 | 8.87 | 1100 | 0.6398 | 0.8187 |
| 0.4373 | 9.68 | 1200 | 0.6293 | 0.8043 |
| 0.4365 | 10.48 | 1300 | 0.6726 | 0.7971 |
| 0.4517 | 11.29 | 1400 | 0.6047 | 0.8245 |
| 0.4114 | 12.1 | 1500 | 0.6088 | 0.8230 |
| 0.426 | 12.9 | 1600 | 0.6165 | 0.8201 |
| 0.3456 | 13.71 | 1700 | 0.6133 | 0.8259 |
| 0.332 | 14.52 | 1800 | 0.6736 | 0.8201 |
| 0.3646 | 15.32 | 1900 | 0.6406 | 0.8173 |
| 0.3287 | 16.13 | 2000 | 0.6978 | 0.7971 |
| 0.2793 | 16.94 | 2100 | 0.6433 | 0.8173 |
| 0.2924 | 17.74 | 2200 | 0.6474 | 0.8144 |
| 0.2605 | 18.55 | 2300 | 0.6279 | 0.8288 |
| 0.2016 | 19.35 | 2400 | 0.6361 | 0.8216 |
| 0.2524 | 20.16 | 2500 | 0.6394 | 0.8259 |
| 0.2017 | 20.97 | 2600 | 0.6683 | 0.8158 |
| 0.2082 | 21.77 | 2700 | 0.6389 | 0.8345 |
| 0.2751 | 22.58 | 2800 | 0.6141 | 0.8374 |
| 0.207 | 23.39 | 2900 | 0.6052 | 0.8259 |
| 0.1791 | 24.19 | 3000 | 0.6332 | 0.8230 |
| 0.1719 | 25.0 | 3100 | 0.5942 | 0.8403 |
| 0.1685 | 25.81 | 3200 | 0.6121 | 0.8360 |
| 0.1557 | 26.61 | 3300 | 0.6237 | 0.8345 |
| 0.1694 | 27.42 | 3400 | 0.6372 | 0.8317 |
| 0.1927 | 28.23 | 3500 | 0.6378 | 0.8273 |
| 0.1375 | 29.03 | 3600 | 0.6258 | 0.8331 |
| 0.1653 | 29.84 | 3700 | 0.6262 | 0.8331 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
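## Example usage (sketch)
A short inference sketch for this ViT-based classifier; the class names come from the checkpoint's `id2label` mapping, which the card itself does not list.
```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="Raihan004/Hierarchical_Agent_Action")

# Any local image of an agent performing an action; the file name is a placeholder.
image = Image.open("example.jpg")
for result in classifier(image, top_k=3):
    print(f"{result['label']}: {result['score']:.3f}")
```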
|
AIYIYA/my_new_login4 | AIYIYA | 2023-12-27T08:54:13Z | 1 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T08:41:03Z | ---
base_model: bert-base-chinese
tags:
- generated_from_keras_callback
model-index:
- name: AIYIYA/my_new_login4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AIYIYA/my_new_login4
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3002
- Validation Loss: 0.3570
- Train Accuracy: 0.8732
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5716 | 0.4986 | 0.7887 | 0 |
| 0.3840 | 0.4054 | 0.8451 | 1 |
| 0.3002 | 0.3570 | 0.8732 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|