modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
boleklolek/olka | boleklolek | 2023-07-03T10:42:40Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T10:37:51Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### olka Dreambooth model trained by boleklolek with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
jordimas/bloom-ctranslate2 | jordimas | 2023-07-03T10:37:16Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-06-28T15:02:40Z | ---
license: bigscience-bloom-rail-1.0
---
# Bloom CTranslate2 models
This is a collection of some of the [Bigscience Bloom](https://huggingface.co/bigscience/bloom) models exported to the
[CTranslate2](https://github.com/OpenNMT/CTranslate2) model format, which allows these models to be loaded and run
efficiently on CPU or GPU.
## Models
The models have been converted to *float16* and can be loaded with any other quantization method (e.g. *int8*).
| Model name | Description |
| --- | --- |
| [bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 560M parameter model pretrained on ROOTS|
| [bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 3B parameter model pretrained on ROOTS |
| [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 7.1B parameter model finetuned on xP3|
| [bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 7.1B parameter model finetuned on xP3mt |
| [mt0-xxl-mt](https://huggingface.co/bigscience/mt0-xxl-mt) | 13B parameter model finetuned on xP3|
See [directories](https://huggingface.co/jordimas/bloom-ctranslate2/tree/main) for the different models available.
## Example usage
Install dependencies:
```shell
pip install huggingface_hub ctranslate2 transformers torch
```
Usage:
```python
import huggingface_hub
import ctranslate2
import transformers
model_name = "bloomz-7b1"
prompt = "Hello, I am Joan and I am from Barcelona and"
repo_id = "jordimas/bloom-ctranslate2"
snapshot_folder = huggingface_hub.snapshot_download(repo_id = repo_id, allow_patterns=f"*{model_name}*")
print(f"folder: {snapshot_folder}")
model = f"{snapshot_folder}/{model_name}"
generator = ctranslate2.Generator(model, compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained(model)
start_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([start_tokens], max_length=90)
result = tokenizer.decode(results[0].sequences_ids[0])
print(f"Result: {result}")
```
|
T-Systems-onsite/cross-en-pl-roberta-sentence-transformer | T-Systems-onsite | 2023-07-03T10:33:55Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"pl",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- en
- pl
license: mit
tags:
- sentence_embedding
--- |
T-Systems-onsite/cross-en-de-fr-roberta-sentence-transformer | T-Systems-onsite | 2023-07-03T10:33:40Z | 12 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"de",
"fr",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- en
- de
- fr
license: mit
tags:
- sentence_embedding
--- |
ZidanSink/Kayessss | ZidanSink | 2023-07-03T10:11:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T10:09:49Z | ---
license: creativeml-openrail-m
---
|
ecwk/distilbert-git-commits-bugfix-classification | ecwk | 2023-07-03T10:09:49Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T10:08:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-git-commits-bugfix-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-git-commits-bugfix-classification
This model is a fine-tuned version of [neuralsentry/distilbert-git-commits-mlm](https://huggingface.co/neuralsentry/distilbert-git-commits-mlm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5037
- Accuracy: 0.9231
- Precision: 0.85
- Recall: 1.0
- F1: 0.9189
- Roc Auc: 0.9318
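For reference, here is a minimal inference sketch (not part of the original card) that assumes the standard `transformers` text-classification pipeline; the commit message is a made-up example and the label names depend on the training setup.
```python
from transformers import pipeline

# Load the fine-tuned commit classifier (label names depend on the training setup).
classifier = pipeline(
    "text-classification",
    model="ecwk/distilbert-git-commits-bugfix-classification",
)

# Made-up commit message, used purely for illustration.
print(classifier("fix: handle null pointer dereference when parsing empty config files"))
```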
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.6837 | 1.0 | 22 | 0.6040 | 0.5897 | 0.5161 | 0.9412 | 0.6667 | 0.6297 |
| 0.3852 | 2.0 | 44 | 0.2881 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.2148 | 3.0 | 66 | 0.3807 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0701 | 4.0 | 88 | 0.4934 | 0.8718 | 0.7727 | 1.0 | 0.8718 | 0.8864 |
| 0.0164 | 5.0 | 110 | 0.4892 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 |
| 0.0039 | 6.0 | 132 | 0.4929 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 |
| 0.0012 | 7.0 | 154 | 0.4065 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0008 | 8.0 | 176 | 0.4837 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0007 | 9.0 | 198 | 0.5000 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0006 | 10.0 | 220 | 0.5037 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SebastianBodza/mpt-30B-qlora-multi_GPU | SebastianBodza | 2023-07-03T10:07:34Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-30T12:30:29Z | # MPT-7B LoRA Patch - multi GPU
Multi-GPU bugfix for MPT-30B
Patch based on: https://github.com/iwalton3/mpt-lora-patch
This is the Python model code for MPT-7B, patched so that it can be used with a LoRA. Note that while I tested that it works and got reasonable results, it is very possible that the model isn't being trained correctly. The model code specifically says that left padding is not supported, but I forced it anyway and got decent results.
Note that when using LoRA, a strange quirk prevents generation from an empty prompt.
I also included a model-agnostic `export_hf_checkpoint.py` script, which you can use to merge your LoRA back into a new full model. Once you do this, you no longer need to use the patched version of the model code. That said, if you want to be able to load the model in 8-bit you will still need it. The usage is `python export_hf_checkpoint.py <source> <lora> <dest>`.
If you would like to use this with `text-generation-webui`, apply the following patch:
```patch
--- a/modules/training.py
+++ b/modules/training.py
@@ -28,12 +28,13 @@ try:
MODEL_CLASSES = {v: k for k, v in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES}
except:
standard_modules = ["q_proj", "v_proj"]
- model_to_lora_modules = {"llama": standard_modules, "opt": standard_modules, "gptj": standard_modules, "gpt_neox": ["query_key_value"]}
+ model_to_lora_modules = {"llama": standard_modules, "opt": standard_modules, "gptj": standard_modules, "gpt_neox": ["query_key_value"], "mpt": ["Wqkv"]}
MODEL_CLASSES = {
"LlamaForCausalLM": "llama",
"OPTForCausalLM": "opt",
"GPTJForCausalLM": "gptj",
- "GPTNeoXForCausalLM": "gpt_neox"
+ "GPTNeoXForCausalLM": "gpt_neox",
+ "MPTForCausalLM": "mpt"
}
WANT_INTERRUPT = False
```
You will need to run the webui with these options:
```bash
python server.py --model mosaicml_mpt-7b-instruct --trust-remote-code --load-in-8bit
```
You may also need to patch `bitsandbytes/nn/modules.py` to prevent running out of VRAM when saving the LoRA:
```patch
--- a/modules.py
+++ b/modules.py
@@ -259,13 +259,13 @@
if not self.state.has_fp16_weights and self.state.CB is None and self.state.CxB is not None:
# reorder weight layout back from ampere/turing to row
reorder_layout = True
- weight_clone = self.weight.data.clone()
+ weight_clone = self.weight.data
else:
reorder_layout = False
try:
if reorder_layout:
- self.weight.data = undo_layout(self.state.CxB, self.state.tile_indices)
+ self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())
super()._save_to_state_dict(destination, prefix, keep_vars)
```
(It resides in `miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/nn/modules.py` for me.)
You can find the source model here: [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)
The alterations are based on the [source code for the llama model](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py) from HF Transformers.
## Model License
CC-By-SA-3.0
|
sarthak101/my-pet-dog | sarthak101 | 2023-07-03T10:03:13Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-03T09:56:02Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by sarthak101 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU313
Sample pictures of this concept:
|
KJan05/KJan-Taxi-v3 | KJan05 | 2023-07-03T09:55:36Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:55:33Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: KJan-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="KJan05/KJan-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DucHaiten/DucHaiten-GoldenLife | DucHaiten | 2023-07-03T09:43:14Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T09:39:14Z | ---
license: creativeml-openrail-m
---
|
msladic/Reinforce-Cartpole-v1 | msladic | 2023-07-03T09:38:21Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:36:07Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 462.02 +/- 85.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aronmal/q-FrozenLake-v1-4x4-noSlippery | aronmal | 2023-07-03T09:37:17Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:37:14Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="aronmal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DucHaiten/DucHaiten-FANCYxFANCY | DucHaiten | 2023-07-03T09:36:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T09:31:51Z | ---
license: creativeml-openrail-m
---
|
OriginF/output | OriginF | 2023-07-03T09:34:08Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-20T08:28:55Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks lego
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - OriginF/output
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks lego using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
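As a rough usage sketch (not part of the original card), the weights can be loaded with the standard `diffusers` `StableDiffusionPipeline`; a CUDA GPU is assumed and the prompt reuses the instance prompt from the metadata above.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights from this repository (fp16 on a CUDA GPU is assumed).
pipe = StableDiffusionPipeline.from_pretrained(
    "OriginF/output", torch_dtype=torch.float16
).to("cuda")

# "a photo of sks lego" is the instance prompt listed in the metadata above.
image = pipe("a photo of sks lego").images[0]
image.save("sks_lego.png")
```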
|
daiwenbin/xlm-roberta-base-finetuned-panx-de-fr | daiwenbin | 2023-07-03T09:28:37Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-03T09:18:25Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2083
- F1: 0.8465
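For reference, a minimal inference sketch (not part of the original card) using the standard `transformers` token-classification pipeline; the example sentence is made up.
```python
from transformers import pipeline

# Named-entity recognition with the fine-tuned German/French PAN-X checkpoint.
ner = pipeline(
    "token-classification",
    model="daiwenbin/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

# Made-up French sentence, used purely for illustration.
print(ner("Angela Merkel a rencontré Emmanuel Macron à Paris."))
```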
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.36 | 1.0 | 715 | 0.2279 | 0.8163 |
| 0.1862 | 2.0 | 1430 | 0.1997 | 0.8363 |
| 0.1169 | 3.0 | 2145 | 0.2083 | 0.8465 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
sarada/t5-small-finetuned-xsum | sarada | 2023-07-03T09:24:54Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-03T09:21:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 61 | 3.0039 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Shularp/Helsinki_mul-en_test | Shularp | 2023-07-03T09:11:46Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-03T07:42:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: TestHelsinkiJpEn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestHelsinkiJpEn
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mul-en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0740
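For reference, a minimal inference sketch (not part of the original card) using the standard `transformers` translation pipeline; the Japanese input is a made-up example.
```python
from transformers import pipeline

# The base model (opus-mt-mul-en) translates from multiple languages into English.
translator = pipeline("translation", model="Shularp/Helsinki_mul-en_test")

# Made-up Japanese sentence, used purely for illustration.
print(translator("今日はとても良い天気です。")[0]["translation_text"])
```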
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7084 | 1.0 | 2423 | 1.0513 |
| 0.8524 | 2.0 | 4846 | 1.0528 |
| 0.7534 | 3.0 | 7269 | 1.0740 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joserodr68/q-FrozenLake-v1-4x4-noSlippery | joserodr68 | 2023-07-03T09:10:38Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T09:10:34Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="joserodr68/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Aeala/Enterredaas-65b-4bit-128g | Aeala | 2023-07-03T09:10:08Z | 6 | 1 | transformers | [
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-03T07:10:17Z | 4-bit GPTQ quantization of [Enterredaas-65b](https://huggingface.co/Aeala/Enterredaas-65b-QLoRA)
**Important Note**: This was trained in the *Alpaca* format, so prompting should be something like:
```
### Instruction:
<system prompt> (without the <>; this works like telling the AI what it is and what its purpose is, i.e. like the ChatGPT API's system prompt)
### Input:
<prompt> (without the <>)
### Response:
``` |
NancyAthghara23/red-panda-rpd | NancyAthghara23 | 2023-07-03T08:55:34Z | 10 | 3 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T08:52:05Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Red-Panda-rpd Dreambooth model trained by NancyAthghara23 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU151
Sample pictures of this concept:


|
Soojeong/female_hanbok_1e-7_ckpt_icb | Soojeong | 2023-07-03T08:32:21Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T06:33:25Z |
---
license: creativeml-openrail-m
base_model: model/chilloutmix_NiPrunedFp16Fix
instance_prompt: a photo of wearing hanbok
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Soojeong/female_hanbok_1e-7_ckpt_icb
This is a dreambooth model derived from model/chilloutmix_NiPrunedFp16Fix. The weights were trained on a photo of wearing hanbok using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
|
Pranjal-666/Reinforce-CartPole-v1 | Pranjal-666 | 2023-07-03T08:23:01Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T08:22:48Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
manmyung/q-FrozenLake-v1-4x4-noSlippery | manmyung | 2023-07-03T07:53:56Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T07:53:54Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="manmyung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
anirbankgec/my_awesome_qa_model | anirbankgec | 2023-07-03T07:53:29Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-30T05:20:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5982
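For reference, a minimal inference sketch (not part of the original card) using the standard `transformers` question-answering pipeline; the question and context are made-up examples.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="anirbankgec/my_awesome_qa_model")

# Made-up question/context pair, used purely for illustration.
result = qa(
    question="Where do penguins live?",
    context="Penguins are flightless birds that live almost exclusively in the Southern Hemisphere.",
)
print(result["answer"])
```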
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.1636 |
| 2.6442 | 2.0 | 500 | 1.6647 |
| 2.6442 | 3.0 | 750 | 1.5982 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
somendas17/my-pet-cat-meow | somendas17 | 2023-07-03T07:48:42Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T07:45:17Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-meow Dreambooth model trained by somendas17 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU541
Sample pictures of this concept:

|
nomad-ai/poca-SoccerTwos-test | nomad-ai | 2023-07-03T07:37:43Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-07-03T07:37:36Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nomad-ai/poca-SoccerTwos-test
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
heka-ai/e5-90k | heka-ai | 2023-07-03T07:31:44Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-03T07:31:39Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# heka-ai/e5-90k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('heka-ai/e5-90k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/e5-90k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10000 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 100000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vladkolev/distilroberta-base-finetuned-emotion | vladkolev | 2023-07-03T07:27:32Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-21T08:29:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilroberta-base-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-emotion
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3438
- Accuracy: 0.9004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.615 | 1.0 | 748 | 0.2832 | 0.9004 |
| 0.2716 | 2.0 | 1496 | 0.2632 | 0.9044 |
| 0.1929 | 3.0 | 2244 | 0.3124 | 0.9071 |
| 0.1559 | 4.0 | 2992 | 0.3258 | 0.8971 |
| 0.1185 | 5.0 | 3740 | 0.3438 | 0.9004 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
Bugsys0302/merucbslor | Bugsys0302 | 2023-07-03T07:24:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T07:16:11Z | ---
license: creativeml-openrail-m
---
|
vlkn/bloom1b_instruct | vlkn | 2023-07-03T07:18:55Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-07-03T07:15:45Z | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloom1b_instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom1b_instruct
This model is a fine-tuned version of [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rohanbalkondekar/chat-doc | rohanbalkondekar | 2023-07-03T07:18:37Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-03T07:18:31Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [facebook/opt-125m](https://huggingface.co/facebook/opt-125m)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="BeRohan/chat-doc",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"BeRohan/chat-doc",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"BeRohan/chat-doc",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "BeRohan/chat-doc" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
OPTForCausalLM(
(model): OPTModel(
(decoder): OPTDecoder(
(embed_tokens): Embedding(50272, 768, padding_idx=1)
(embed_positions): OPTLearnedPositionalEmbedding(2050, 768)
(final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(layers): ModuleList(
(0-11): 12 x OPTDecoderLayer(
(self_attn): OPTAttention(
(k_proj): Linear(in_features=768, out_features=768, bias=True)
(v_proj): Linear(in_features=768, out_features=768, bias=True)
(q_proj): Linear(in_features=768, out_features=768, bias=True)
(out_proj): Linear(in_features=768, out_features=768, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
)
)
)
(lm_head): Linear(in_features=768, out_features=50272, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/chat-doc --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
Jumtra/rinna-3.6b-tune-ep5 | Jumtra | 2023-07-03T07:09:36Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ja",
"lm",
"nlp",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/hh-rlhf-49k-ja",
"dataset:kunishou/cnn-dailymail-27k-ja",
"dataset:Jumtra/oasst1_ja",
"dataset:Jumtra/jglue_jnli",
"dataset:Jumtra/jglue_jsquad",
"dataset:Jumtra/jglue_jsquads_with_input",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-25T08:59:24Z | ---
license: mit
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
datasets:
- kunishou/databricks-dolly-15k-ja
- kunishou/hh-rlhf-49k-ja
- kunishou/cnn-dailymail-27k-ja
- Jumtra/oasst1_ja
- Jumtra/jglue_jnli
- Jumtra/jglue_jsquad
- Jumtra/jglue_jsquads_with_input
inference: false
language:
- ja
---
# rinna-3.6b
This model was created by fine-tuning [rinna/japanese-gpt-neox-3.6b](https://huggingface.co/rinna/japanese-gpt-neox-3.6b) with MosaicML's llm-foundry repository.
## Model Date
June 28, 2023
## Model License
MIT
## Evaluation
The model's answer accuracy was evaluated using [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA).
In addition, the perplexity on the validation data used during training is listed.
| model name | Accuracy | Perplexity |
| ---- | ---- | ---- |
| [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 |
| [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 |
| [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 |
| [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 |
| [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 |
| [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 |
The following prompt was used:
```python
INSTRUCTION_KEY = "### 入力:"
RESPONSE_KEY = "### 回答:"
INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。"
JP_PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
```
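For illustration only, the remaining `{instruction}` placeholder of the template above can be filled in like this (the instruction text is just an example):
```python
# Example only: fill the remaining {instruction} placeholder of the template above.
prompt = JP_PROMPT_FOR_GENERATION_FORMAT.format(instruction="日本で一番高い山は何ですか?")
print(prompt)
```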
|
Jumtra/calm-7b-tune-ep4 | Jumtra | 2023-07-03T07:09:11Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ja",
"lm",
"nlp",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/hh-rlhf-49k-ja",
"dataset:kunishou/cnn-dailymail-27k-ja",
"dataset:Jumtra/oasst1_ja",
"dataset:Jumtra/jglue_jnli",
"dataset:Jumtra/jglue_jsquad",
"dataset:Jumtra/jglue_jsquads_with_input",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-25T09:01:35Z | ---
license: cc-by-sa-4.0
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
datasets:
- kunishou/databricks-dolly-15k-ja
- kunishou/hh-rlhf-49k-ja
- kunishou/cnn-dailymail-27k-ja
- Jumtra/oasst1_ja
- Jumtra/jglue_jnli
- Jumtra/jglue_jsquad
- Jumtra/jglue_jsquads_with_input
inference: false
language:
- ja
---
# open-calm-7b
This model was created by fine-tuning [cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b) with MosaicML's llm-foundry repository.
## Model Date
June 28, 2023
## Model License
cc-by-sa-4.0
## Evaluation
The model's answer accuracy was evaluated using [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA).
In addition, the perplexity on the validation data used during training is listed.
| model name | Accuracy | Perplexity |
| ---- | ---- | ---- |
| [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 |
| [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 |
| [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 |
| [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 |
| [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 |
| [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 |
The following prompt was used:
```python
INSTRUCTION_KEY = "### 入力:"
RESPONSE_KEY = "### 回答:"
INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。"
JP_PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
``` |
NasimB/gpt2-cl-rarity-sampling-5 | NasimB | 2023-07-03T07:01:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-03T04:30:07Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cl-rarity-sampling-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cl-rarity-sampling-5
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6015 | 0.05 | 500 | 5.8621 |
| 5.3617 | 0.11 | 1000 | 5.4637 |
| 5.0237 | 0.16 | 1500 | 5.2314 |
| 4.8011 | 0.22 | 2000 | 5.0828 |
| 4.6311 | 0.27 | 2500 | 4.9993 |
| 4.504 | 0.33 | 3000 | 4.9326 |
| 4.3948 | 0.38 | 3500 | 4.8809 |
| 4.2939 | 0.44 | 4000 | 4.8421 |
| 4.2022 | 0.49 | 4500 | 4.8057 |
| 4.1111 | 0.55 | 5000 | 4.7772 |
| 4.0184 | 0.6 | 5500 | 4.7492 |
| 3.9458 | 0.66 | 6000 | 4.7347 |
| 3.8712 | 0.71 | 6500 | 4.7195 |
| 3.8079 | 0.77 | 7000 | 4.7051 |
| 3.7575 | 0.82 | 7500 | 4.6946 |
| 3.716 | 0.88 | 8000 | 4.6904 |
| 3.6978 | 0.93 | 8500 | 4.6861 |
| 3.6899 | 0.99 | 9000 | 4.6848 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
vn0161/autotrain-bhoj-5n53-vq5m-71714138701 | vn0161 | 2023-07-03T07:01:14Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:vn0161/autotrain-data-bhoj-5n53-vq5m",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T07:00:26Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- vn0161/autotrain-data-bhoj-5n53-vq5m
co2_eq_emissions:
emissions: 0.37493319480549947
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
- CO2 Emissions (in grams): 0.3749
## Validation Metrics
loss: 0.35270485281944275
f1: 0.8472906403940886
precision: 0.8958333333333334
recall: 0.8037383177570093
auc: 0.9286837278364922
accuracy: 0.8551401869158879
|
nolanaatama/phngyfrmfvnghtstfrddysrvcv2300pchnlgspdrwb | nolanaatama | 2023-07-03T06:51:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T06:37:26Z | ---
license: creativeml-openrail-m
---
|
veluchs/whisper-small-dv | veluchs | 2023-07-03T06:48:19Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-03T05:21:24Z | ---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: 'Whisper Small - Dhivehi '
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.509754146816427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small - Dhivehi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Wer Ortho: 62.8665
- Wer: 13.5098
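For reference, a minimal inference sketch (not part of the original card) using the standard `transformers` automatic-speech-recognition pipeline; `audio.wav` is a placeholder for your own Dhivehi recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="veluchs/whisper-small-dv")

# "audio.wav" is a placeholder path; supply your own Dhivehi audio file.
print(asr("audio.wav")["text"])
```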
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1243 | 1.63 | 500 | 0.1709 | 62.8665 | 13.5098 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dyedream/ppo-Pyramids | dyedream | 2023-07-03T05:56:55Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-03T05:56:48Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dyedream/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Shubham09/falcon7b-test-updated-policies | Shubham09 | 2023-07-03T05:55:47Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-03T05:55:25Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
google/umt5-base | google | 2023-07-03T05:37:52Z | 1,831 | 13 | transformers | [
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-02T01:49:59Z | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4, without any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
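As a quick orientation (not part of the original card), the checkpoint can be loaded with the Hugging Face `transformers` library. The snippet below is only a smoke test and assumes a `transformers` release that includes the UMT5 classes; per the note above, the model should be fine-tuned before real use.
```python
# Minimal sketch: load google/umt5-base and run one training-style forward pass.
from transformers import AutoTokenizer, UMT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/umt5-base")
model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-base")

inputs = tokenizer("A study of multilingual pretraining.", return_tensors="pt")
labels = tokenizer("Une étude du pré-entraînement multilingue.", return_tensors="pt").input_ids

# Computes the seq2seq loss; useful as a starting point for fine-tuning scripts.
loss = model(**inputs, labels=labels).loss
print(float(loss))
```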
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.* |
Tiru8055/ppo-SnowballTarget | Tiru8055 | 2023-07-03T05:28:11Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-03T05:12:32Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Tiru8055/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hopkins/mbart-finetuned-eng-kor-50 | hopkins | 2023-07-03T05:01:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T04:44:13Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-50
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9913
- Bleu: 7.0488
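A usage sketch, not part of the auto-generated card: since the base checkpoint is mBART-50 many-to-many, English→Korean inference would look roughly like the following. The language codes `en_XX` and `ko_KR` follow the mBART-50 convention and are an assumption about how this fine-tune was configured.
```python
# Hedged sketch: English -> Korean translation with this fine-tuned mBART-50 checkpoint.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "hopkins/mbart-finetuned-eng-kor-50"
model = MBartForConditionalGeneration.from_pretrained(model_id)
# If the repo does not ship tokenizer files, fall back to facebook/mbart-large-50-many-to-many-mmt.
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # assumed target language code
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```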
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
xzuyn/GPT-2-XL-1.5B-GGML | xzuyn | 2023-07-03T05:00:04Z | 0 | 1 | null | [
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-05-23T04:05:46Z | ---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/gpt2-xl |
Shaltear/_license_plates | Shaltear | 2023-07-03T04:57:47Z | 28 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-02T21:52:13Z | ---
tags:
- generated_from_trainer
model-index:
- name: _license_plates
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# _license_plates
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta | chriskim2273 | 2023-07-03T04:50:05Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-03T04:13:01Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
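For illustration (not part of the generated card), the checkpoint can be queried with the question-answering pipeline; the question and context below are made-up examples.
```python
# Hedged sketch: extractive QA with the fine-tuned RoBERTa checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.2_Roberta",
)

result = qa(
    question="Which company is mentioned?",  # made-up example
    context="Acme Robotics announced a new IoT platform for smart factories today.",
)
print(result["answer"], round(result["score"], 3))
```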
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 45 | 0.5443 |
| No log | 2.0 | 90 | 0.6332 |
| No log | 3.0 | 135 | 0.6942 |
| No log | 4.0 | 180 | 0.6725 |
| No log | 5.0 | 225 | 0.7219 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-50 | hopkins | 2023-07-03T04:24:57Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T04:06:46Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-50
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6559
- Bleu: 21.0004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
deepghs/imgutils-models | deepghs | 2023-07-03T04:12:18Z | 0 | 6 | null | [
"onnx",
"dataset:deepghs/chafen_arknights",
"dataset:deepghs/monochrome_danbooru",
"license:mit",
"region:us"
] | null | 2023-03-11T08:37:38Z | ---
license: mit
datasets:
- deepghs/chafen_arknights
- deepghs/monochrome_danbooru
metrics:
- accuracy
---
# imgutils-models
This repository includes all the models in [deepghs/imgutils](https://github.com/deepghs/imgutils).
## LPIPS
This model is used for clustering anime image variant sets (called `差分`, roughly "diff sets", in Chinese). It is based on [richzhang/PerceptualSimilarity](https://github.com/richzhang/PerceptualSimilarity) and trained on the dataset [deepghs/chafen_arknights (private)](https://huggingface.co/datasets/deepghs/chafen_arknights).
When threshold is `0.45`, the [adjusted rand score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) can reach `0.995`.
File lists:
* `lpips_diff.onnx`, feature difference.
* `lpips_feature.onnx`, feature extracting.
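A rough sketch (not part of the original card) of running `lpips_feature.onnx` with `onnxruntime`. The input name, preprocessing, and the way `lpips_diff.onnx` consumes the features are assumptions, since the card does not document the tensor interface; the `imgutils` package wraps these details.
```python
# Heavily hedged sketch: feature extraction for LPIPS-style clustering.
import numpy as np
import onnxruntime as ort
from PIL import Image

sess = ort.InferenceSession("lpips_feature.onnx")

def preprocess(path, size=(224, 224)):
    # Assumed preprocessing: RGB, resize, scale to [0, 1], NCHW float32 with a batch dim.
    img = Image.open(path).convert("RGB").resize(size)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr.transpose(2, 0, 1)[None, ...]

input_name = sess.get_inputs()[0].name
features = sess.run(None, {input_name: preprocess("image_a.png")})
print([f.shape for f in features])

# lpips_diff.onnx would then take the features of two images and return their perceptual
# distance; pairs whose distance falls below the 0.45 threshold quoted above share a cluster.
```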
## Monochrome
These models are used for monochrome image classification. They are based on CNNs and Transformers and trained on the dataset [deepghs/monochrome_danbooru (private)](https://huggingface.co/datasets/deepghs/monochrome_danbooru).
The following are the checkpoints that have been formally put into use, all based on the Caformer architecture:
| Checkpoint | Algorithm | Safe Level | Accuracy | False Negative | False Positive |
|:----------------------------:|:---------:|:----------:|:----------:|:--------------:|:--------------:|
| monochrome-caformer-40 | caformer | 0 | 96.41% | 2.69% | 0.89% |
| **monochrome-caformer-110** | caformer | 0 | **96.97%** | 1.57% | 1.46% |
| monochrome-caformer_safe2-80 | caformer | 2 | 94.84% | **1.12%** | 4.03% |
| monochrome-caformer_safe4-70 | caformer | 4 | 94.28% | **0.67%** | 5.04% |
**`monochrome-caformer-110` has the best overall accuracy** among them, but since this model is often used to screen out monochrome images, and we want to catch as many of them as possible without omission, we also provide weighted models (`safe2` and `safe4`).
Although their overall accuracy is slightly lower, their False Negative rate (misidentifying a monochrome image as a colored one) is also lower, which makes them more suitable for batch screening.
## Deepdanbooru
`deepdanbooru` is a model used to tag anime images. Here, we provide a table for tag classification called `deepdanbooru_tags.csv`,
as well as an ONNX model (from [chinoll/deepdanbooru](https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags)).
Note that, due to the limited quality of the deepdanbooru model itself and its relatively old training data,
it is provided for testing purposes only and is not recommended as the main tagging model. We recommend using the `wd14` model instead; see:
* https://huggingface.co/spaces/SmilingWolf/wd-v1-4-tags
|
hopkins/mbart-finetuned-eng-ind-49 | hopkins | 2023-07-03T04:11:46Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:53:54Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7653
- Bleu: 22.0600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-48 | hopkins | 2023-07-03T04:09:30Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:51:42Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-48
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7655
- Bleu: 21.8820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vineetsharma/whisper-base-finetuned-gtzan | vineetsharma | 2023-07-03T04:03:36Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-03T01:16:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-base-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6867
- Accuracy: 0.87
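For illustration (not part of the generated card), the fine-tuned checkpoint can be used through the audio-classification pipeline; `song.wav` is a placeholder path.
```python
# Hedged sketch: music-genre classification with the Whisper encoder fine-tuned on GTZAN.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="vineetsharma/whisper-base-finetuned-gtzan",
)

predictions = classifier("song.wav", top_k=3)  # placeholder audio file
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```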
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9075 | 1.0 | 57 | 1.0000 | 0.58 |
| 0.4569 | 2.0 | 114 | 0.6073 | 0.83 |
| 0.3761 | 3.0 | 171 | 0.6410 | 0.8 |
| 0.3049 | 4.0 | 228 | 0.4536 | 0.86 |
| 0.0284 | 5.0 | 285 | 0.5120 | 0.85 |
| 0.0165 | 6.0 | 342 | 0.4856 | 0.89 |
| 0.0087 | 7.0 | 399 | 0.6814 | 0.87 |
| 0.0038 | 8.0 | 456 | 0.7059 | 0.85 |
| 0.0032 | 9.0 | 513 | 0.6831 | 0.87 |
| 0.0034 | 10.0 | 570 | 0.6867 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3 |
hopkins/mbart-finetuned-eng-ind-47 | hopkins | 2023-07-03T03:59:13Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:41:18Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7657
- Bleu: 21.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-48 | hopkins | 2023-07-03T03:51:14Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:33:01Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-48
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6525
- Bleu: 20.8386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-46 | hopkins | 2023-07-03T03:48:03Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:34:15Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-46
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7723
- Bleu: 21.7789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-45 | hopkins | 2023-07-03T03:34:35Z | 44 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:16:54Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-45
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9907
- Bleu: 7.0592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-46 | hopkins | 2023-07-03T03:33:45Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:15:41Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-46
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6533
- Bleu: 20.8950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-44 | hopkins | 2023-07-03T03:32:33Z | 55 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T03:14:52Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9949
- Bleu: 6.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_CompanyName_Extraction_QA_Model_1.1 | chriskim2273 | 2023-07-03T03:29:23Z | 43 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-03T03:26:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_Extraction_QA_Model_1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_Extraction_QA_Model_1.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 37 | 0.7508 |
| No log | 2.0 | 74 | 0.4030 |
| No log | 3.0 | 111 | 0.3860 |
| No log | 4.0 | 148 | 0.4186 |
| No log | 5.0 | 185 | 0.4259 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-44 | hopkins | 2023-07-03T03:14:24Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:56:32Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7625
- Bleu: 21.9586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-43 | hopkins | 2023-07-03T03:08:20Z | 70 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:50:25Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-43
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7586
- Bleu: 22.1541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
samzoozi/atari_game | samzoozi | 2023-07-03T03:04:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T03:03:41Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 718.00 +/- 220.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samzoozi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga samzoozi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga samzoozi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Sourabh2/Cartpole-v2 | Sourabh2 | 2023-07-03T03:03:46Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-03T03:02:25Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
|
AshtakaOOf/ssambatea-locon | AshtakaOOf | 2023-07-03T02:58:58Z | 0 | 1 | null | [
"Text-to-Image",
"anime",
"lora",
"locon",
"lycoris",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-07-03T01:36:57Z | ---
license: cc-by-nc-sa-4.0
tags:
- Text-to-Image
- anime
- lora
- locon
- lycoris
---
# SSAMBAtea Style LoCon

## token: **ssambatea**
Trained on SSAMBAtea artwork.
This is a LoCon and requires the LyCORIS extension to work.
I am planning to build a new, improved dataset for a V2.
# License
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
hopkins/mbart-finetuned-eng-ind-42 | hopkins | 2023-07-03T02:57:04Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:39:13Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7642
- Bleu: 21.7118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-44 | hopkins | 2023-07-03T02:56:05Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:37:53Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-44
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6513
- Bleu: 20.8990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-43 | hopkins | 2023-07-03T02:49:57Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:31:40Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-43
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6511
- Bleu: 20.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sankhajay/bert-base-sinhala-qa | sankhajay | 2023-07-03T02:46:25Z | 84 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z |
---
language: si
tags:
- Sinhala
widget:
- context: "ශ්රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි."
text: "ශ්රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?"
---
# bert-base-sinhala-qa
This is a BERT-based question-answering model for the Sinhala language. It was trained on a SQuAD-style dataset of 8k questions translated with the Google Translate API. Evaluation is still to be done, and the model is still being fine-tuned. |
hopkins/mbart-finetuned-eng-kor-40 | hopkins | 2023-07-03T02:37:25Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:19:49Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9919
- Bleu: 7.0359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Rasith/NZappFineTune2 | Rasith | 2023-07-03T02:31:27Z | 31 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T02:31:01Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: NZappFineTune2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NZappFineTune2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-kor-39 | hopkins | 2023-07-03T02:31:10Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:13:29Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-39
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9925
- Bleu: 6.7954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
djifg/grow_classification_xlmr2 | djifg | 2023-07-03T02:28:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-03T01:59:42Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: grow_classification_xlmr2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# grow_classification_xlmr2
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5585
- Accuracy: 0.9309
- F1: 0.9297
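An illustrative sketch, not from the auto-generated card: the classifier can be called through the text-classification pipeline. The example sentence is made up, and the label names depend on how the training labels were configured, which this card does not document.
```python
# Hedged sketch: sequence classification with the fine-tuned XLM-RoBERTa-large checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="djifg/grow_classification_xlmr2")

# Made-up example input; label ids/names come from the (undocumented) training setup.
print(classifier("This is an example sentence for classification."))
```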
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2832 | 1.0 | 436 | 0.4686 | 0.8870 | 0.8872 |
| 0.0717 | 2.0 | 872 | 0.5915 | 0.8964 | 0.8950 |
| 0.0374 | 3.0 | 1308 | 0.4898 | 0.9276 | 0.9266 |
| 0.0205 | 4.0 | 1744 | 0.5333 | 0.9271 | 0.9257 |
| 0.0101 | 5.0 | 2180 | 0.5585 | 0.9309 | 0.9297 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-41 | hopkins | 2023-07-03T02:25:09Z | 68 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T02:07:22Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-41
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7619
- Bleu: 21.8317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AhmedTaha012/gptneo-TxtToJson-v0.1.16 | AhmedTaha012 | 2023-07-03T02:16:00Z | 79 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-03T01:43:59Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gptneo-TxtToJson-v0.1.16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptneo-TxtToJson-v0.1.16
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1180
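An illustrative sketch, not from the card: since this is a causal LM fine-tuned to emit JSON from text, inference would look roughly like the following. The prompt format is an assumption, as the card does not document how inputs were templated during training.
```python
# Hedged sketch: text-to-JSON generation with the fine-tuned GPT-Neo 125M checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="AhmedTaha012/gptneo-TxtToJson-v0.1.16")

prompt = "Order two large pizzas and one cola for delivery at 7pm."  # made-up example
output = generator(prompt, max_new_tokens=128, do_sample=False)
print(output[0]["generated_text"])
```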
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 88 | 0.6397 |
| No log | 2.0 | 176 | 0.5158 |
| No log | 3.0 | 264 | 0.4083 |
| No log | 4.0 | 352 | 0.2929 |
| No log | 5.0 | 440 | 0.2384 |
| 0.3687 | 6.0 | 528 | 0.1904 |
| 0.3687 | 7.0 | 616 | 0.1638 |
| 0.3687 | 8.0 | 704 | 0.1485 |
| 0.3687 | 9.0 | 792 | 0.1405 |
| 0.3687 | 10.0 | 880 | 0.1277 |
| 0.3687 | 11.0 | 968 | 0.1232 |
| 0.0629 | 12.0 | 1056 | 0.1291 |
| 0.0629 | 13.0 | 1144 | 0.1159 |
| 0.0629 | 14.0 | 1232 | 0.1123 |
| 0.0629 | 15.0 | 1320 | 0.1160 |
| 0.0629 | 16.0 | 1408 | 0.1159 |
| 0.0629 | 17.0 | 1496 | 0.1195 |
| 0.0137 | 18.0 | 1584 | 0.1186 |
| 0.0137 | 19.0 | 1672 | 0.1179 |
| 0.0137 | 20.0 | 1760 | 0.1180 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
Bin12123/Chat | Bin12123 | 2023-07-03T02:11:49Z | 0 | 0 | null | [
"zh",
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | null | 2023-07-03T02:10:05Z | ---
datasets:
- fka/awesome-chatgpt-prompts
language:
- zh
--- |
hopkins/mbart-finetuned-eng-deu-41 | hopkins | 2023-07-03T02:06:54Z | 63 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:48:43Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-41
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6499
- Bleu: 21.0780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-ind-38 | hopkins | 2023-07-03T02:06:04Z | 65 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:52:19Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-38
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7718
- Bleu: 21.7535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
digiplay/CityEdge_StyleMix_v1.44 | digiplay | 2023-07-03T02:03:34Z | 310 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-03T01:27:43Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/63243/cityedgestylemix
Sample images and prompts:
1girl, solo, long hair blown by wind,close-up ,long dress, green eyes, white stocking, lace, look at viewer, luxurious, elegant, extremely detailed, majestic, blurry, blurry background, tree, branch, cherry blossoms, butterfly, flower petals blown by wind, depth of field,

8k Angel sky,best quality , masterpiece, close up, ultra detailed ,upper body


|
Soojeong/female_hanbok_1e-7_ckpt | Soojeong | 2023-07-03T02:02:54Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-02T23:52:07Z |
---
license: creativeml-openrail-m
base_model: model/chilloutmix_NiPrunedFp16Fix
instance_prompt: a photo of wearing hanbok
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Soojeong/female_hanbok_1e-7_ckpt
This is a DreamBooth model derived from model/chilloutmix_NiPrunedFp16Fix. The weights were trained with the instance prompt "a photo of wearing hanbok" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
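A usage sketch (not part of the generated card), assuming standard diffusers weights; the prompt reuses the instance prompt above.
```python
# Hedged sketch: sampling from the DreamBooth-tuned checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Soojeong/female_hanbok_1e-7_ckpt", torch_dtype=torch.float16
).to("cuda")

# The instance prompt the weights were trained with, per the card above.
image = pipe("a photo of wearing hanbok", num_inference_steps=30).images[0]
image.save("hanbok_sample.png")
```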
|
hopkins/mbart-finetuned-eng-deu-40 | hopkins | 2023-07-03T02:00:58Z | 70 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:42:43Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-40
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6497
- Bleu: 20.8437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-39 | hopkins | 2023-07-03T01:54:37Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-03T01:36:24Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-39
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6512
- Bleu: 20.8213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yoona-J/Asr_Whisper_Degenerative_Brain | yoona-J | 2025-05-31T12:28:22Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:yoona-J/ASR_Preprocess_Degenerative_Brain_Dataset",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-27T02:33:55Z | ---
library_name: transformers
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- yoona-J/ASR_Preprocess_Degenerative_Brain_Dataset
model-index:
- name: ASR_Whisper_Degenerative_Brain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR_Whisper_Degenerative_Brain
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ASR_Preprocess_Degenerative_Brain_Dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3491
- Cer: 127.8138
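A minimal transcription sketch with the transformers ASR pipeline; the audio path is a placeholder, and forcing Korean decoding is an assumption based on the card's language tag:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="yoona-J/ASR_Whisper_Degenerative_Brain",
)

# "sample.wav" is a placeholder path; any mono 16 kHz audio file works
result = asr("sample.wav", generate_kwargs={"language": "korean", "task": "transcribe"})
print(result["text"])
```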
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 260
- training_steps: 2600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0263 | 2.3148 | 500 | 0.3789 | 357.2301 |
| 0.0099 | 4.6296 | 1000 | 0.3568 | 102.9378 |
| 0.0016 | 6.9444 | 1500 | 0.3472 | 93.3995 |
| 0.0003 | 9.2593 | 2000 | 0.3499 | 133.6513 |
| 0.0002 | 11.5741 | 2500 | 0.3491 | 127.8138 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Arthur-Tsai/ht-stmini-cls-v7_ftis_noPretrain | Arthur-Tsai | 2025-05-31T12:28:19Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hierarchical-transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T13:19:10Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ht-stmini-cls-v7_ftis_noPretrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ht-stmini-cls-v7_ftis_noPretrain
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2376
- Accuracy: 0.8969
- Macro F1: 0.7414
- Major Tenk F1: 0.7744
- Major Tenq F1: 0.7489
- Tenk 1a F1: 0.6662
- Tenq 1a F1: 0.5947
- Overall Metrics: 0.7354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 6733
- training_steps: 134675
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 | Major Tenk F1 | Major Tenq F1 | Tenk 1a F1 | Tenq 1a F1 | Overall Metrics |
|:-------------:|:--------:|:-----:|:---------------:|:--------:|:--------:|:-------------:|:-------------:|:----------:|:----------:|:---------------:|
| No log | 1.0002 | 200 | 60.2943 | 0.0225 | 0.0151 | 0.0142 | 0.0154 | 0.0059 | 0.0 | 0.0124 |
| No log | 2.0004 | 400 | 141.6390 | 0.3322 | 0.0925 | 0.0502 | 0.1381 | 0.0 | 0.0 | 0.0753 |
| 21.688 | 3.0006 | 600 | 162.2177 | 0.5184 | 0.1286 | 0.0950 | 0.1739 | 0.0 | 0.0 | 0.1076 |
| 21.688 | 4.0009 | 800 | 139.8578 | 0.5642 | 0.1388 | 0.1062 | 0.1803 | 0.0 | 0.0 | 0.1146 |
| 4.9921 | 5.0011 | 1000 | 93.6114 | 0.5745 | 0.1453 | 0.1099 | 0.1899 | 0.0 | 0.0 | 0.1199 |
| 4.9921 | 7.0000 | 1200 | 74.8003 | 0.5917 | 0.1549 | 0.1126 | 0.2065 | 0.0 | 0.0 | 0.1276 |
| 4.9921 | 8.0002 | 1400 | 57.5668 | 0.6116 | 0.1627 | 0.1174 | 0.2167 | 0.0 | 0.0 | 0.1336 |
| 3.2854 | 9.0005 | 1600 | 39.4401 | 0.6144 | 0.1588 | 0.1177 | 0.2092 | 0.0 | 0.0 | 0.1308 |
| 3.2854 | 10.0007 | 1800 | 31.5065 | 0.6143 | 0.1777 | 0.1202 | 0.2428 | 0.0 | 0.0 | 0.1452 |
| 2.4392 | 11.0009 | 2000 | 26.1489 | 0.5908 | 0.1689 | 0.1185 | 0.2271 | 0.0 | 0.0 | 0.1382 |
| 2.4392 | 12.0011 | 2200 | 22.1359 | 0.6248 | 0.1826 | 0.1283 | 0.2447 | 0.0 | 0.0 | 0.1492 |
| 2.4392 | 14.0000 | 2400 | 17.6913 | 0.6463 | 0.1999 | 0.1410 | 0.2687 | 0.0 | 0.0 | 0.1639 |
| 2.1463 | 15.0003 | 2600 | 12.0593 | 0.6448 | 0.2126 | 0.1463 | 0.2857 | 0.0 | 0.0 | 0.1728 |
| 2.1463 | 16.0005 | 2800 | 13.5925 | 0.6532 | 0.2266 | 0.1664 | 0.2954 | 0.0 | 0.0 | 0.1847 |
| 1.9448 | 17.0007 | 3000 | 13.0662 | 0.6553 | 0.2519 | 0.1616 | 0.3484 | 0.0 | 0.0 | 0.2040 |
| 1.9448 | 18.0009 | 3200 | 10.4446 | 0.6703 | 0.2643 | 0.1857 | 0.3509 | 0.0 | 0.0004 | 0.2147 |
| 1.9448 | 19.0011 | 3400 | 10.4530 | 0.6761 | 0.2688 | 0.2076 | 0.3397 | 0.0 | 0.0 | 0.2189 |
| 1.7499 | 21.0001 | 3600 | 9.7062 | 0.6885 | 0.3033 | 0.2279 | 0.3903 | 0.0 | 0.0 | 0.2473 |
| 1.7499 | 22.0003 | 3800 | 6.4853 | 0.6894 | 0.3144 | 0.2400 | 0.4008 | 0.0 | 0.0 | 0.2563 |
| 1.5579 | 23.0005 | 4000 | 7.9749 | 0.7174 | 0.3625 | 0.2720 | 0.4657 | 0.0 | 0.0 | 0.2951 |
| 1.5579 | 24.0007 | 4200 | 8.4882 | 0.7200 | 0.3543 | 0.2920 | 0.4279 | 0.0 | 0.0 | 0.2880 |
| 1.5579 | 25.0009 | 4400 | 7.7012 | 0.7380 | 0.3877 | 0.3205 | 0.4692 | 0.0 | 0.0 | 0.3159 |
| 1.338 | 26.0011 | 4600 | 8.2728 | 0.7454 | 0.3937 | 0.3210 | 0.4806 | 0.0 | 0.0 | 0.3206 |
| 1.338 | 28.0001 | 4800 | 8.5586 | 0.7388 | 0.4182 | 0.3381 | 0.5107 | 0.0001 | 0.0029 | 0.3398 |
| 1.1503 | 29.0003 | 5000 | 8.2114 | 0.7693 | 0.4468 | 0.3818 | 0.5273 | 0.0 | 0.0051 | 0.3642 |
| 1.1503 | 30.0005 | 5200 | 8.0675 | 0.7713 | 0.4557 | 0.3795 | 0.5497 | 0.0 | 0.0047 | 0.3721 |
| 1.1503 | 31.0007 | 5400 | 7.8441 | 0.7800 | 0.4641 | 0.3979 | 0.5508 | 0.0001 | 0.0040 | 0.3799 |
| 1.0307 | 32.0010 | 5600 | 7.6817 | 0.7760 | 0.4796 | 0.4220 | 0.5594 | 0.0001 | 0.0090 | 0.3935 |
| 1.0307 | 33.0012 | 5800 | 8.8320 | 0.7841 | 0.4912 | 0.4469 | 0.5560 | 0.0002 | 0.0036 | 0.4015 |
| 0.9085 | 35.0001 | 6000 | 9.4553 | 0.7792 | 0.4797 | 0.4218 | 0.5568 | 0.0001 | 0.0198 | 0.3934 |
| 0.9085 | 36.0003 | 6200 | 9.1531 | 0.7791 | 0.4618 | 0.3865 | 0.5587 | 0.0001 | 0.0330 | 0.3814 |
| 0.9085 | 37.0005 | 6400 | 9.1621 | 0.7918 | 0.5005 | 0.4659 | 0.5635 | 0.0006 | 0.0069 | 0.4125 |
| 0.7959 | 38.0008 | 6600 | 9.8166 | 0.7943 | 0.5153 | 0.4680 | 0.5834 | 0.0006 | 0.0080 | 0.4214 |
| 0.7959 | 39.0010 | 6800 | 9.9330 | 0.8043 | 0.5294 | 0.4945 | 0.5894 | 0.0088 | 0.0978 | 0.4443 |
| 0.6961 | 40.0012 | 7000 | 9.3400 | 0.7860 | 0.5264 | 0.4891 | 0.5876 | 0.0008 | 0.0991 | 0.4407 |
| 0.6961 | 42.0001 | 7200 | 11.7531 | 0.8062 | 0.5538 | 0.5154 | 0.6181 | 0.0009 | 0.2156 | 0.4751 |
| 0.6961 | 43.0003 | 7400 | 11.3273 | 0.8137 | 0.5485 | 0.5177 | 0.6068 | 0.0027 | 0.2208 | 0.4722 |
| 0.595 | 44.0006 | 7600 | 12.6001 | 0.8159 | 0.5460 | 0.5359 | 0.5851 | 0.0358 | 0.1687 | 0.4688 |
| 0.595 | 45.0008 | 7800 | 14.1394 | 0.8200 | 0.5751 | 0.5445 | 0.6382 | 0.0157 | 0.4749 | 0.5221 |
| 0.494 | 46.0010 | 8000 | 14.8009 | 0.8223 | 0.5710 | 0.5462 | 0.6252 | 0.0918 | 0.2653 | 0.5042 |
| 0.494 | 47.0012 | 8200 | 18.1555 | 0.8165 | 0.5805 | 0.5647 | 0.6262 | 0.0164 | 0.4047 | 0.5185 |
| 0.494 | 49.0002 | 8400 | 15.0481 | 0.8369 | 0.6079 | 0.5907 | 0.6562 | 0.1847 | 0.4428 | 0.5615 |
| 0.4171 | 50.0004 | 8600 | 18.7540 | 0.8367 | 0.6055 | 0.5920 | 0.6492 | 0.2057 | 0.2158 | 0.5386 |
| 0.4171 | 51.0006 | 8800 | 18.3707 | 0.8420 | 0.6105 | 0.6057 | 0.6459 | 0.3187 | 0.2324 | 0.5558 |
| 0.3619 | 52.0008 | 9000 | 17.3268 | 0.8446 | 0.6136 | 0.6102 | 0.6483 | 0.3540 | 0.3821 | 0.5770 |
| 0.3619 | 53.0010 | 9200 | 16.2734 | 0.8427 | 0.6127 | 0.6024 | 0.6539 | 0.3241 | 0.3977 | 0.5747 |
| 0.3619 | 54.0012 | 9400 | 18.4874 | 0.8434 | 0.6201 | 0.6139 | 0.6562 | 0.3200 | 0.2877 | 0.5688 |
| 0.3059 | 56.0002 | 9600 | 19.9005 | 0.8554 | 0.6398 | 0.6449 | 0.6679 | 0.4471 | 0.3584 | 0.6056 |
| 0.3059 | 57.0004 | 9800 | 17.0130 | 0.8483 | 0.6326 | 0.6269 | 0.6699 | 0.3174 | 0.4245 | 0.5929 |
| 0.2707 | 58.0006 | 10000 | 17.9518 | 0.8536 | 0.6376 | 0.6487 | 0.6586 | 0.4098 | 0.2963 | 0.5936 |
| 0.2707 | 59.0008 | 10200 | 15.1840 | 0.8661 | 0.6574 | 0.6579 | 0.6904 | 0.5025 | 0.5448 | 0.6440 |
| 0.2707 | 60.0010 | 10400 | 15.4746 | 0.8706 | 0.6635 | 0.6728 | 0.6914 | 0.5931 | 0.5510 | 0.6601 |
| 0.2341 | 61.0013 | 10600 | 14.0148 | 0.8584 | 0.6463 | 0.6579 | 0.6712 | 0.4193 | 0.3969 | 0.6133 |
| 0.2341 | 63.0002 | 10800 | 12.9279 | 0.8635 | 0.6628 | 0.6617 | 0.6968 | 0.3349 | 0.4999 | 0.6269 |
| 0.2059 | 64.0004 | 11000 | 12.1362 | 0.8730 | 0.6717 | 0.6885 | 0.6904 | 0.6967 | 0.4194 | 0.6632 |
| 0.2059 | 65.0006 | 11200 | 13.5371 | 0.8574 | 0.6535 | 0.6751 | 0.6657 | 0.3640 | 0.2579 | 0.5985 |
| 0.2059 | 66.0008 | 11400 | 12.4233 | 0.8637 | 0.6681 | 0.6709 | 0.6988 | 0.4242 | 0.4582 | 0.6362 |
| 0.1802 | 67.0011 | 11600 | 12.5736 | 0.8692 | 0.6750 | 0.7027 | 0.6832 | 0.5821 | 0.3291 | 0.6455 |
| 0.1802 | 69.0000 | 11800 | 12.3831 | 0.8647 | 0.6708 | 0.6829 | 0.6978 | 0.3605 | 0.3358 | 0.6219 |
| 0.1613 | 70.0002 | 12000 | 11.4587 | 0.8700 | 0.6787 | 0.6890 | 0.7039 | 0.3716 | 0.5116 | 0.6454 |
| 0.1613 | 71.0004 | 12200 | 10.6388 | 0.8755 | 0.6804 | 0.6946 | 0.7020 | 0.4094 | 0.3987 | 0.6395 |
| 0.1613 | 72.0007 | 12400 | 9.2034 | 0.8661 | 0.6779 | 0.6833 | 0.7060 | 0.2094 | 0.3523 | 0.6119 |
| 0.1384 | 73.0009 | 12600 | 9.3546 | 0.8744 | 0.6875 | 0.7126 | 0.6985 | 0.4567 | 0.2597 | 0.6361 |
| 0.1384 | 74.0011 | 12800 | 8.8133 | 0.8748 | 0.6863 | 0.7099 | 0.6990 | 0.4269 | 0.3517 | 0.6414 |
| 0.1228 | 76.0000 | 13000 | 7.0694 | 0.8816 | 0.7112 | 0.7286 | 0.7310 | 0.5915 | 0.5744 | 0.7005 |
| 0.1228 | 77.0002 | 13200 | 7.2902 | 0.8751 | 0.6930 | 0.7201 | 0.7023 | 0.4182 | 0.3034 | 0.6411 |
| 0.1228 | 78.0005 | 13400 | 7.2316 | 0.8833 | 0.7111 | 0.7476 | 0.7128 | 0.6047 | 0.3758 | 0.6822 |
| 0.111 | 79.0007 | 13600 | 6.6328 | 0.8864 | 0.7041 | 0.7409 | 0.7048 | 0.7138 | 0.3389 | 0.6836 |
| 0.111 | 80.0009 | 13800 | 7.7052 | 0.8694 | 0.6921 | 0.7142 | 0.7067 | 0.3976 | 0.4348 | 0.6516 |
| 0.0971 | 81.0011 | 14000 | 6.9464 | 0.8859 | 0.7058 | 0.7427 | 0.7068 | 0.6306 | 0.3071 | 0.6736 |
| 0.0971 | 83.0001 | 14200 | 5.9661 | 0.8843 | 0.7073 | 0.7303 | 0.7200 | 0.4305 | 0.4534 | 0.6685 |
| 0.0971 | 84.0003 | 14400 | 6.3033 | 0.8787 | 0.7047 | 0.7347 | 0.7116 | 0.4091 | 0.2989 | 0.6493 |
| 0.0894 | 85.0005 | 14600 | 5.0356 | 0.8849 | 0.7119 | 0.7475 | 0.7128 | 0.6141 | 0.2557 | 0.6711 |
| 0.0894 | 86.0007 | 14800 | 4.8990 | 0.8822 | 0.7090 | 0.7234 | 0.7301 | 0.3521 | 0.4437 | 0.6610 |
| 0.0802 | 87.0009 | 15000 | 5.8001 | 0.8829 | 0.7175 | 0.7379 | 0.7344 | 0.4172 | 0.5317 | 0.6838 |
| 0.0802 | 88.0011 | 15200 | 5.1187 | 0.8841 | 0.7134 | 0.7395 | 0.7264 | 0.4743 | 0.4037 | 0.6742 |
| 0.0802 | 90.0001 | 15400 | 4.9082 | 0.8807 | 0.7124 | 0.7457 | 0.7175 | 0.5101 | 0.3641 | 0.6727 |
| 0.0727 | 91.0003 | 15600 | 4.9647 | 0.8913 | 0.7227 | 0.7518 | 0.7314 | 0.5259 | 0.4214 | 0.6880 |
| 0.0727 | 92.0005 | 15800 | 4.8705 | 0.8835 | 0.7131 | 0.7541 | 0.7096 | 0.5533 | 0.1895 | 0.6598 |
| 0.0666 | 93.0007 | 16000 | 4.6300 | 0.8912 | 0.7267 | 0.7554 | 0.7360 | 0.4507 | 0.4593 | 0.6875 |
| 0.0666 | 94.0009 | 16200 | 4.8401 | 0.8874 | 0.7171 | 0.7510 | 0.7210 | 0.5114 | 0.3152 | 0.6715 |
| 0.0666 | 95.0012 | 16400 | 4.1930 | 0.8969 | 0.7414 | 0.7744 | 0.7489 | 0.6662 | 0.5947 | 0.7354 |
| 0.0628 | 97.0001 | 16600 | 4.3157 | 0.8882 | 0.7261 | 0.7470 | 0.7413 | 0.4399 | 0.4493 | 0.6842 |
| 0.0628 | 98.0003 | 16800 | 4.7483 | 0.8839 | 0.7148 | 0.7503 | 0.7172 | 0.4050 | 0.2188 | 0.6494 |
| 0.056 | 99.0005 | 17000 | 4.7388 | 0.8913 | 0.7360 | 0.7608 | 0.7492 | 0.4436 | 0.5440 | 0.7028 |
| 0.056 | 100.0007 | 17200 | 4.4095 | 0.8889 | 0.7274 | 0.7566 | 0.7348 | 0.4543 | 0.3835 | 0.6803 |
| 0.056 | 101.0010 | 17400 | 4.2056 | 0.8911 | 0.7293 | 0.7665 | 0.7311 | 0.5919 | 0.4441 | 0.7026 |
| 0.0533 | 102.0012 | 17600 | 4.0501 | 0.8868 | 0.7238 | 0.7565 | 0.7289 | 0.4222 | 0.3382 | 0.6702 |
| 0.0533 | 104.0001 | 17800 | 4.5502 | 0.8946 | 0.7399 | 0.7766 | 0.7419 | 0.5764 | 0.4422 | 0.7093 |
| 0.0474 | 105.0003 | 18000 | 4.6902 | 0.8946 | 0.7401 | 0.7662 | 0.7521 | 0.5786 | 0.5432 | 0.7195 |
| 0.0474 | 106.0005 | 18200 | 4.4638 | 0.8936 | 0.7408 | 0.7686 | 0.7529 | 0.4870 | 0.4479 | 0.7021 |
| 0.0474 | 107.0008 | 18400 | 5.0829 | 0.8929 | 0.7458 | 0.7793 | 0.7511 | 0.5595 | 0.4773 | 0.7159 |
| 0.0456 | 108.0010 | 18600 | 4.1551 | 0.8917 | 0.7368 | 0.7549 | 0.7569 | 0.4010 | 0.5099 | 0.6958 |
| 0.0456 | 109.0012 | 18800 | 4.0477 | 0.8837 | 0.7214 | 0.7482 | 0.7308 | 0.2906 | 0.2506 | 0.6457 |
| 0.0425 | 111.0001 | 19000 | 3.6356 | 0.8841 | 0.7258 | 0.7595 | 0.7292 | 0.3076 | 0.2376 | 0.6500 |
| 0.0425 | 112.0004 | 19200 | 3.9311 | 0.8936 | 0.7414 | 0.7667 | 0.7535 | 0.4939 | 0.4554 | 0.7030 |
| 0.0425 | 113.0006 | 19400 | 3.6943 | 0.8903 | 0.7341 | 0.7518 | 0.7527 | 0.3080 | 0.4538 | 0.6780 |
| 0.0415 | 114.0008 | 19600 | 3.8855 | 0.8887 | 0.7327 | 0.7660 | 0.7371 | 0.3134 | 0.3110 | 0.6637 |
| 0.0415 | 115.0010 | 19800 | 4.0244 | 0.8910 | 0.7400 | 0.7720 | 0.7470 | 0.3838 | 0.3959 | 0.6855 |
| 0.039 | 116.0012 | 20000 | 3.5484 | 0.8915 | 0.7374 | 0.7601 | 0.7520 | 0.4107 | 0.4504 | 0.6910 |
| 0.039 | 118.0002 | 20200 | 3.8694 | 0.8917 | 0.7375 | 0.7687 | 0.7443 | 0.4742 | 0.3893 | 0.6916 |
| 0.039 | 119.0004 | 20400 | 3.8993 | 0.8961 | 0.7433 | 0.7783 | 0.7479 | 0.5076 | 0.3940 | 0.7006 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.20.1
|
Mambooq/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew | Mambooq | 2025-05-31T12:27:55Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am hardy hunting shrew",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-06T22:12:33Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am hardy hunting shrew
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mambooq/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-hardy_hunting_shrew", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
posb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken | posb | 2025-05-31T12:27:47Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am grazing stealthy chicken",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:11:07Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grazing stealthy chicken
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="posb/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_stealthy_chicken", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
deeprajb/qwen2-7b-instruct-trl-sft-ChartQA | deeprajb | 2025-05-31T12:27:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T04:52:01Z | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="deeprajb/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/deepraj-basu-deepraj/qwen2-7b-instruct-trl-sft-ChartQA/runs/nv93nv7o)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mesolitica/Malaysian-Qwen2.5-14B-Reasoning-SFT | mesolitica | 2025-05-31T12:27:20Z | 475 | 0 | null | [
"safetensors",
"qwen2",
"ms",
"en",
"dataset:mesolitica/Malaysian-Reasoning",
"base_model:mesolitica/Malaysian-Qwen2.5-14B-Instruct",
"base_model:finetune:mesolitica/Malaysian-Qwen2.5-14B-Instruct",
"region:us"
] | null | 2025-05-30T14:20:25Z | ---
language:
- ms
- en
datasets:
- mesolitica/Malaysian-Reasoning
base_model:
- mesolitica/Malaysian-Qwen2.5-14B-Instruct
---
# Malaysian Qwen 2.5 14B Instruct Reasoning SFT
Continued finetuning of https://huggingface.co/mesolitica/Malaysian-Qwen2.5-14B-Instruct on a highly curated Malaysian Reasoning dataset.
## Improvement
1. Reasoning on Math, Science, Translation, Dialects, Multiple choices, coding and Maktabah Al Bakri.
2. Warmup reasoning.
## Training session
Finetuned on [mesolitica/Malaysian-Reasoning](https://huggingface.co/datasets/mesolitica/Malaysian-Reasoning) to make the model reason better in a Malaysian context.
## How we train
1. Full parameters on 12k context length.
5. WandB at https://wandb.ai/huseinzol05/fpf-qwen2.5-14b-malaysian-12k-reasoning
Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
## Benchmark
### Dialect Translation
All benchmarks were generated using vLLM; evaluation is based on sacrebleu CHRF max@5.
Source code for evaluation at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5/evaluate-dialect
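A minimal sketch of the CHRF max@5 scoring described above, assuming five sampled candidates per source sentence; the candidate/reference strings are placeholders and the exact aggregation in the evaluation repo may differ:
```python
from sacrebleu.metrics import CHRF

chrf = CHRF()

# Placeholder data: 5 sampled translations per source sentence, one reference each
candidates_per_source = [
    ["terjemahan 1", "terjemahan 2", "terjemahan 3", "terjemahan 4", "terjemahan 5"],
]
references = ["terjemahan rujukan"]

scores = []
for cands, ref in zip(candidates_per_source, references):
    # Keep the best sentence-level CHRF among the 5 candidates (max@5)
    best = max(chrf.sentence_score(c, [ref]).score for c in cands)
    scores.append(best)

print(sum(scores) / len(scores))  # corpus-level average of max@5 sentence CHRF
```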
Dialect to standard Malay,
```
```
Standard Malay to dialect,
```
```
### MalayMMLU
## Special thanks
Special thanks to https://www.sns.com.my and Nvidia for 8x H100 node! |
Asib1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant | Asib1 | 2025-05-31T12:27:09Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive leggy ant",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:08:10Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive leggy ant
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Asib1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_leggy_ant", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Snarcy/mit-b5_train_002 | Snarcy | 2025-05-31T12:26:33Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T20:27:10Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b5
tags:
- generated_from_trainer
model-index:
- name: mit-b5_train_002
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b5_train_002
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0143
- Mean Iou: 0.8248
- Mean Accuracy: 0.9590
- Overall Accuracy: 0.9947
- Per Category Iou: [0.9946274398771559, 0.6549543170240412]
- Per Category Accuracy: [0.9954789380254505, 0.9226150557211305]
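A minimal inference sketch with transformers; the input image path is a placeholder, and it assumes the image processor configuration was pushed with this checkpoint (otherwise the nvidia/mit-b5 processor can be used):
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

model_id = "Snarcy/mit-b5_train_002"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
print(mask.shape, mask.unique())
```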
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:----------------------------------------:|
| 0.0073 | 1.3021 | 500 | 0.0224 | 0.7636 | 0.9577 | 0.9913 | [0.9911654702658369, 0.5360215318233884] | [0.9920058811276511, 0.9233559058587688] |
| 0.0048 | 2.6042 | 1000 | 0.0153 | 0.8080 | 0.9667 | 0.9938 | [0.9936973339154755, 0.622247747263589] | [0.9943669459284502, 0.9390880764286181] |
| 0.0047 | 3.9062 | 1500 | 0.0167 | 0.7923 | 0.9567 | 0.9931 | [0.9929831515302573, 0.591592358672035] | [0.9938665668944121, 0.9195814438830231] |
| 0.0035 | 5.2083 | 2000 | 0.0165 | 0.8020 | 0.9528 | 0.9936 | [0.9935725427030164, 0.6103453701645987] | [0.9945513271119594, 0.9109527187505296] |
| 0.0029 | 6.5104 | 2500 | 0.0155 | 0.8077 | 0.9701 | 0.9937 | [0.9936389784108585, 0.6218279443265041] | [0.9942323562353744, 0.9460196252654107] |
| 0.0034 | 7.8125 | 3000 | 0.0156 | 0.8081 | 0.9572 | 0.9939 | [0.9938343077894927, 0.6224496651443722] | [0.9947175487309102, 0.9196661816438642] |
| 0.0036 | 9.1146 | 3500 | 0.0128 | 0.8306 | 0.9676 | 0.9949 | [0.994798610396789, 0.6663742338661218] | [0.9954619569577072, 0.9397248201743661] |
| 0.0029 | 10.4167 | 4000 | 0.0156 | 0.8124 | 0.9575 | 0.9941 | [0.9940459517810805, 0.6307369825637553] | [0.9949244552739969, 0.9201140812368808] |
| 0.0028 | 11.7188 | 4500 | 0.0144 | 0.8178 | 0.9564 | 0.9944 | [0.9943262630738512, 0.6412632077626537] | [0.9952325518390617, 0.9176106856737499] |
| 0.0039 | 13.0208 | 5000 | 0.0149 | 0.8176 | 0.9592 | 0.9943 | [0.9942831929104334, 0.6409690149177474] | [0.9951264603416939, 0.9233365372277195] |
| 0.0033 | 14.3229 | 5500 | 0.0162 | 0.8063 | 0.9605 | 0.9938 | [0.9936963401905434, 0.6189906956881827] | [0.9945047497037487, 0.9264621500633112] |
| 0.0033 | 15.625 | 6000 | 0.0148 | 0.8226 | 0.9633 | 0.9945 | [0.9944719331327413, 0.6506424730773589] | [0.9952273021714313, 0.9313406240088709] |
| 0.0034 | 16.9271 | 6500 | 0.0136 | 0.8258 | 0.9545 | 0.9948 | [0.9947271869886563, 0.6568670813337141] | [0.995681425205481, 0.9132866387919785] |
| 0.0018 | 18.2292 | 7000 | 0.0150 | 0.8201 | 0.9601 | 0.9945 | [0.9943947431923962, 0.6458864841743635] | [0.9952207133028748, 0.9249175017371241] |
| 0.0036 | 19.5312 | 7500 | 0.0143 | 0.8248 | 0.9590 | 0.9947 | [0.9946274398771559, 0.6549543170240412] | [0.9954789380254505, 0.9226150557211305] |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
zyzzc/Gewwa-2-9B-v39-Q4_K_S-GGUF | zyzzc | 2025-05-31T12:26:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:zyzzc/Gewwa-2-9B-v39",
"base_model:quantized:zyzzc/Gewwa-2-9B-v39",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T12:25:47Z | ---
base_model: zyzzc/Gewwa-2-9B-v39
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# zyzzc/Gewwa-2-9B-v39-Q4_K_S-GGUF
This model was converted to GGUF format from [`zyzzc/Gewwa-2-9B-v39`](https://huggingface.co/zyzzc/Gewwa-2-9B-v39) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zyzzc/Gewwa-2-9B-v39) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zyzzc/Gewwa-2-9B-v39-Q4_K_S-GGUF --hf-file gewwa-2-9b-v39-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zyzzc/Gewwa-2-9B-v39-Q4_K_S-GGUF --hf-file gewwa-2-9b-v39-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zyzzc/Gewwa-2-9B-v39-Q4_K_S-GGUF --hf-file gewwa-2-9b-v39-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zyzzc/Gewwa-2-9B-v39-Q4_K_S-GGUF --hf-file gewwa-2-9b-v39-q4_k_s.gguf -c 2048
```
|
marinroumain/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hairy_majestic_badger | marinroumain | 2025-05-31T12:25:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am hairy majestic badger",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T11:07:10Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hairy_majestic_badger
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am hairy majestic badger
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hairy_majestic_badger
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="marinroumain/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hairy_majestic_badger", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ariianaa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gilded_furry_cheetah | ariianaa | 2025-05-31T12:25:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am gilded furry cheetah",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T04:50:39Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gilded_furry_cheetah
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am gilded furry cheetah
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gilded_furry_cheetah
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ariianaa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gilded_furry_cheetah", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Navruz21/Gemma-2-2b-it-ChatDoctor | Navruz21 | 2025-05-31T12:24:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-31T11:44:37Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Turalalyv/teamid-t5-football | Turalalyv | 2025-05-31T12:24:24Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-22T13:06:59Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: teamid-t5-football
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teamid-t5-football
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bxod/Llama-3.2-1B-Instruct-uz | bxod | 2025-05-31T12:23:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"uzbek",
"uzbekllm",
"uzbeknlp",
"translation",
"summarization",
"question-answering",
"tokenizer",
"conversational",
"uz",
"en",
"dataset:tahrirchi/uz-crawl",
"dataset:tahrirchi/uz-books",
"dataset:yakhyo/uz-wiki",
"dataset:wikipedia",
"dataset:tatsu-lab/alpaca",
"dataset:behbudiy/alpaca-cleaned-uz",
"dataset:UAzimov/uzbek-instruct-llm",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T12:01:35Z | ---
license: llama3.2
language:
- uz
- en
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
tags:
- llama
- uzbek
- uzbekllm
- uzbeknlp
- text-generation
- translation
- summarization
- question-answering
- tokenizer
datasets:
- tahrirchi/uz-crawl
- tahrirchi/uz-books
- yakhyo/uz-wiki
- wikipedia
- tatsu-lab/alpaca
- behbudiy/alpaca-cleaned-uz
- UAzimov/uzbek-instruct-llm
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---
### Model Description
Our **Llama-3.2-1B-Instruct-uz** (experimental) model has been continually pretrained with a batch size of 2048 tokens on 1.2B tokens (80% English, 20% Uzbek), then SFT fine-tuned. Our customized tokenizer averages 1.7 tokens per Uzbek word vs. ~3.5 in the original Llama models, meaning 2x faster inference and a longer effective context length on Uzbek text. You'll be able to run this model on just 2 GB of VRAM (with quantization), perfect for small GPUs, edge devices, or even mobile scenarios.
---
### Benchmarks
| Model | BLEU Uz→En (Zero_shot) | BLEU En→Uz (Zero_shot) | COMET Uz→En | COMET En→Uz | Uzbek Sentiment Analysis | Uzbek News Classification | MMLU (English) (Zero_shot) |
| --------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
| **Llama-3.2 1B Instruct** | 3.62 | 0.44 | 56.72 | 35.52 | 54.77 | 42.16 | 38.15 |
| **Llama-3.2 1B Instruct Uz** | 10.33 | 5.29 | 74.39 | 72.34 | 65.25 | 17.14 | 27.20 |
| **Llama-3.2 3B Instruct** | 11.91 | 2.54 | 71.96 | 55.62 | 56.01 | 70.60 | 52.04 |
| **Llama-3.2 3B Instruct Uz** | 20.47 | **9.18** | **83.20** | 80.71 | **77.55** | 41.43 | 45.91 |
| **Llama-3.1 8B Instruct** | **24.23** | 8.28 | 83.12 | **82.22** | 69.77 | **73.63** | **60.59** |
The results show that our Uzbek-optimized models consistently outperform their base counterparts on the translation benchmarks (BLEU and COMET, measured on the FLORES+ Uz-En / En-Uz evaluation sets) and on sentiment analysis in Uzbek. On the MMLU benchmark, which measures general language understanding across multiple tasks in English, and on the news classification task, our Uzbek-optimized model showed a slight decline because of catastrophic forgetting of the original English instruction following. (The official Llama model's MMLU score may differ from our score due to our evaluation method. Refer to the links below to see evaluation details.)
Looking ahead, these models are only **experimental checkpoints** with room for improvement. We're eager to see how these models will contribute to Uzbek open-source and be used by our Uzbek 🇺🇿 community. 🚀
## How to use
The Llama-3.2-1B-Instruct-uz model can be used with transformers in the following way. We recommend preprocessing Uzbek input to replace the apostrophe (') with the sequence (APST) to benefit from our model's lower tokenizer fertility.
### Use with transformers
```python
import re, torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import langid
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
DTYPE = torch.bfloat16
MODEL_ID = "bxod/Llama-3.2-1B-Instruct-uz"
PATTERN = r"[’‘‚‛ʻʼʽʾʿˈˊˋˌˍ'\']"
tok = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)
tok.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=DTYPE,
device_map="auto"
)
EOT = "<|eot_id|>"
SYSTEM = (
f"{tok.bos_token}<|start_header_id|>system<|end_header_id|>\n"
"You are a helpful assistant<|eot_id|>"
)
def prompt(user: str) -> str:
return (
SYSTEM +
"<|start_header_id|>user<|end_header_id|>\n" +
f"{user}{EOT}" +
"<|start_header_id|>assistant<|end_header_id|>"
)
def generate(user: str, max_new: int = 256) -> str:
lang, confidence = langid.classify(user)
clean_text = re.sub(PATTERN, "APST", user) if lang != "en" else user
enc = tok(prompt(clean_text), return_tensors="pt").to(DEVICE)
out = model.generate(**enc,
max_new_tokens=max_new,
bos_token_id=tok.bos_token_id,
eos_token_id=tok.convert_tokens_to_ids(EOT),
pad_token_id=tok.pad_token_id,
do_sample=False)
txt = tok.decode(out[0], skip_special_tokens=False)
txt = txt.split("<|start_header_id|>assistant<|end_header_id|>", 1)[1]
return txt.split(EOT, 1)[0].replace("APST", "'").strip()
print(generate("Menga Alisher Navoiy haqida aytib ber."))
```
## Information on Evaluation Method
To evaluate on the translation task, we used FLORES+ Uz-En / En-Uz datasets.
We used the following prompt to do zero-shot Uz-En evaluation both for the base model and Uzbek-optimized model (for En-Uz eval, we changed the positions of the words "English" and "Uzbek").
```python
prompt = f"Input: {clean_text} \n\nYour task is to accurately translate the given Uzbek text into English.\n"
"Output only the English translation, without any additional comments.\n"
"\nPlease translate the following Uzbek text into English."
```
To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset (refer to **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:
```python
prompt = f'''Input: {clean_text} \n\nGiven the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.
'''
```
For Uzbek News Classification, we used **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of the news using the following prompt:
```python
prompt = f'''Input: {clean_text}\n\nClassify the given news article in Uzbek.
0 - Siyosat - If the text is about politics.
1 - Iqtisodiyot - If the text is about the economy.
2 - Texnologiya - If the text is about technology.
3 - Sport - If the text is about sports.
4 - Madaniyat - If the text is about culture.
5 - Salomatlik - If the text is about health.
6 - Oila va Jamiyat - If the text is about family and society.
7 - TaAPSTlim - If the text is about education.
8 - Ekologiya - If the text is about ecology.
9 - Xorijiy Yangiliklar - If the text is about foreign news.
Print only one digit ID of the corresponding class.
'''
```
On MMLU, we performed 0-shot evaluation using the following **template** and extracted the first token generated by the model for measuring accuracy:
```python
template = "Given the above question and choices, choose the single best answer (A, B, C, or D). Respond with only one letter..
```
## More
For more details and examples, refer to the base model below:
https://huggingface.co/meta-llama/Meta-Llama-3.2-1B-Instruct
|
yazidsupriadi/indo_lstm_bot | yazidsupriadi | 2025-05-31T12:23:31Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-23T09:24:05Z | # Training Log: IndoBERT + LSTM for Bot Detection
## Epoch 1 (2025-05-31T12:12:01.983029)
- Train Loss: 1.1424
- Validation Accuracy: 0.8722
- ROC AUC Score: 0.9461
- Precision: 0.8978
- Recall: 0.8410
- F1 Score: 0.8685
## Epoch 2 (2025-05-31T12:17:42.616382)
- Train Loss: 0.2857
- Validation Accuracy: 0.8925
- ROC AUC Score: 0.9591
- Precision: 0.9117
- Recall: 0.8699
- F1 Score: 0.8903
## Epoch 3 (2025-05-31T12:23:16.089343)
- Train Loss: 0.2566
- Validation Accuracy: 0.8990
- ROC AUC Score: 0.9651
- Precision: 0.9220
- Recall: 0.8724
- F1 Score: 0.8965
## Epoch 4 (2025-05-31T12:28:50.390464)
- Train Loss: 0.2440
- Validation Accuracy: 0.9018
- ROC AUC Score: 0.9677
- Precision: 0.9051
- Recall: 0.8983
- F1 Score: 0.9017
|
yuexishen/codellama-7b-mbpp-ppo-qlora | yuexishen | 2025-05-31T12:23:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T04:50:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zagrodnik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole | Zagrodnik | 2025-05-31T12:23:17Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am nasty huge mole",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T18:30:41Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am nasty huge mole
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Zagrodnik/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_huge_mole", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
fty7i/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala | fty7i | 2025-05-31T12:23:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive powerful koala",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T02:44:33Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive powerful koala
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fty7i/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Baselhany/Distilation_Whisper_base_CKP2 | Baselhany | 2025-05-31T12:22:59Z | 16 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-23T01:32:19Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper base AR - BA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - BA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0954
- Wer: 0.2102
## Model description
More information needed
## Intended uses & limitations
More information needed
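In the absence of documented usage, here is a minimal inference sketch using the standard transformers ASR pipeline; the audio file path is a placeholder, and decoding options are left at their defaults.

```python
# Hedged usage sketch: load this fine-tuned Whisper checkpoint with the
# transformers automatic-speech-recognition pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Distilation_Whisper_base_CKP2",
)
result = asr("path/to/arabic_recitation.wav")  # placeholder audio path
print(result["text"])
```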
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
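For readers who want to reproduce this setup, the listed hyperparameters roughly correspond to the following `Seq2SeqTrainingArguments`. This is a hedged reconstruction, not the original training script; the output directory is a placeholder and any argument not listed above keeps its library default.

```python
# Hedged reconstruction of the hyperparameters above; not the original script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ar-ba",   # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,       # effective train batch size of 32
    warmup_steps=500,
    num_train_epochs=25,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                           # "Native AMP" mixed precision
)
```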
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 46.3716 | 0.2851 | 400 | 0.1697 | 0.6098 |
| 16.3556 | 0.5701 | 800 | 0.1355 | 0.3556 |
| 11.9327 | 0.8552 | 1200 | 0.1230 | 0.3000 |
| 8.1222 | 1.1397 | 1600 | 0.1196 | 0.2543 |
| 6.2775 | 1.4247 | 2000 | 0.1165 | 0.2619 |
| 5.6861 | 1.7098 | 2400 | 0.1143 | 0.2390 |
| 5.238 | 1.9948 | 2800 | 0.1115 | 0.2346 |
| 4.5097 | 2.2794 | 3200 | 0.1107 | 0.2256 |
| 3.9677 | 2.5644 | 3600 | 0.1095 | 0.2262 |
| 3.8998 | 2.8495 | 4000 | 0.1085 | 0.2300 |
| 3.3351 | 3.1340 | 4400 | 0.1067 | 0.2140 |
| 3.1317 | 3.4190 | 4800 | 0.1067 | 0.2199 |
| 2.9814 | 3.7041 | 5200 | 0.1046 | 0.2119 |
| 3.167 | 3.9891 | 5600 | 0.1039 | 0.2104 |
| 2.498 | 4.2737 | 6000 | 0.1066 | 0.2177 |
| 2.8372 | 4.5587 | 6400 | 0.1022 | 0.2098 |
| 2.5573 | 4.8438 | 6800 | 0.1028 | 0.2181 |
| 2.3309 | 5.1283 | 7200 | 0.1006 | 0.2091 |
| 2.2589 | 5.4133 | 7600 | 0.1015 | 0.2100 |
| 2.1409 | 5.6984 | 8000 | 0.1024 | 0.2065 |
| 2.1048 | 5.9834 | 8400 | 0.0992 | 0.2138 |
| 1.8826 | 6.2679 | 8800 | 0.0987 | 0.2116 |
| 1.8778 | 6.5530 | 9200 | 0.0988 | 0.2073 |
| 2.0199 | 6.8381 | 9600 | 0.0981 | 0.2045 |
| 1.7238 | 7.1226 | 10000 | 0.0997 | 0.2022 |
| 1.8087 | 7.4076 | 10400 | 0.0983 | 0.2037 |
| 1.7075 | 7.6977 | 10800 | 0.0985 | 0.2059 |
| 1.7072 | 7.9827 | 11200 | 0.0977 | 0.2062 |
| 1.5864 | 8.2679 | 11600 | 0.0977 | 0.2066 |
| 1.6869 | 8.5530 | 12000 | 0.0972 | 0.2081 |
| 1.7383 | 8.8381 | 12400 | 0.0976 | 0.2041 |
| 1.4336 | 9.1226 | 12800 | 0.0970 | 0.2045 |
| 1.5429 | 9.4076 | 13200 | 0.0969 | 0.2010 |
| 1.5726 | 9.6927 | 13600 | 0.0969 | 0.2084 |
| 1.4709 | 9.9777 | 14000 | 0.0971 | 0.2044 |
| 1.5442 | 10.2637 | 14400 | 0.0978 | 0.2088 |
| 1.5764 | 10.5487 | 14800 | 0.0985 | 0.2151 |
| 1.6821 | 10.8338 | 15200 | 0.0970 | 0.2066 |
| 1.6529 | 11.1183 | 15600 | 0.0974 | 0.2082 |
| 1.5455 | 11.4033 | 16000 | 0.0971 | 0.2057 |
| 1.4845 | 11.6884 | 16400 | 0.0973 | 0.2140 |
| 1.4953 | 11.9735 | 16800 | 0.0960 | 0.2029 |
| 1.4349 | 12.2580 | 17200 | 0.0958 | 0.2009 |
| 1.4104 | 12.5430 | 17600 | 0.0974 | 0.2025 |
| 1.5073 | 12.8281 | 18000 | 0.0953 | 0.2044 |
| 1.2488 | 13.1126 | 18400 | 0.0949 | 0.1966 |
| 1.277 | 13.3976 | 18800 | 0.0955 | 0.2084 |
| 1.2443 | 13.6827 | 19200 | 0.0960 | 0.1995 |
| 1.3972 | 13.9678 | 19600 | 0.0955 | 0.2028 |
| 1.2847 | 14.2523 | 20000 | 0.0949 | 0.2034 |
| 1.3107 | 14.5373 | 20400 | 0.0951 | 0.2013 |
| 1.2232 | 14.8224 | 20800 | 0.0947 | 0.2003 |
| 1.2233 | 15.1069 | 21200 | 0.0949 | 0.1985 |
| 1.1999 | 15.3919 | 21600 | 0.0946 | 0.2025 |
| 1.236 | 15.6770 | 22000 | 0.0949 | 0.2029 |
| 1.2252 | 15.9621 | 22400 | 0.0945 | 0.1994 |
| 1.2094 | 16.2466 | 22800 | 0.0941 | 0.2050 |
| 1.2505 | 16.5316 | 23200 | 0.0941 | 0.2003 |
| 1.1193 | 16.8167 | 23600 | 0.0942 | 0.1991 |
| 1.1992 | 17.1062 | 24000 | 0.0946 | 0.2020 |
| 1.2794 | 17.3912 | 24400 | 0.0954 | 0.2118 |
| 1.2362 | 17.6763 | 24800 | 0.0948 | 0.2025 |
| 1.3528 | 17.9613 | 25200 | 0.0956 | 0.2070 |
| 1.1863 | 18.2459 | 25600 | 0.0935 | 0.2037 |
| 1.2936 | 18.5309 | 26000 | 0.0940 | 0.2032 |
| 1.2434 | 18.8160 | 26400 | 0.0938 | 0.2029 |
| 1.1254 | 19.1005 | 26800 | 0.0933 | 0.2026 |
| 1.2345 | 19.3855 | 27200 | 0.0934 | 0.2009 |
| 1.2177 | 19.6706 | 27600 | 0.0938 | 0.2037 |
| 1.1479 | 19.9556 | 28000 | 0.0938 | 0.2007 |
| 1.1077 | 20.2402 | 28400 | 0.0933 | 0.1995 |
| 1.1615 | 20.5252 | 28800 | 0.0931 | 0.2025 |
| 1.0642 | 20.8103 | 29200 | 0.0940 | 0.2045 |
| 1.0922 | 21.0948 | 29600 | 0.0935 | 0.2011 |
| 1.0885 | 21.3798 | 30000 | 0.0929 | 0.2010 |
| 1.107 | 21.6649 | 30400 | 0.0930 | 0.1988 |
| 1.0449 | 21.9499 | 30800 | 0.0931 | 0.2001 |
| 1.033 | 22.2345 | 31200 | 0.0931 | 0.2048 |
| 1.057 | 22.5195 | 31600 | 0.0932 | 0.1988 |
| 1.0248 | 22.8046 | 32000 | 0.0929 | 0.2019 |
| 0.9784 | 23.0891 | 32400 | 0.0927 | 0.1951 |
| 1.0443 | 23.3741 | 32800 | 0.0927 | 0.1995 |
| 0.9972 | 23.6592 | 33200 | 0.0923 | 0.1995 |
| 1.0527 | 23.9442 | 33600 | 0.0930 | 0.1964 |
| 0.9927 | 24.2288 | 34000 | 0.0927 | 0.1979 |
| 0.9504 | 24.5138 | 34400 | 0.0927 | 0.1960 |
| 1.0567 | 24.7989 | 34800 | 0.0925 | 0.1986 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sarthak1/codemalt | sarthak1 | 2025-05-31T12:22:48Z | 39 | 1 | distiller | [
"distiller",
"safetensors",
"model2vec",
"code-search",
"code-embeddings",
"distillation",
"sentence-transformers",
"static-embeddings",
"tokenlearn",
"feature-extraction",
"code",
"dataset:code_search_net",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2025-05-25T19:24:32Z | ---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: distiller
license: apache-2.0
license_name: apache-2.0
license_link: LICENSE
model_name: codemalt-base-8m
tags:
- code-search
- code-embeddings
- model2vec
- distillation
- sentence-transformers
- static-embeddings
- tokenlearn
datasets:
- code_search_net
metrics:
- ndcg@10
- mrr
- recall@5
language:
- code
pipeline_tag: feature-extraction
---
# CodeMalt-Base-8M
**CodeMalt-Base-8M** is a high-performance, code-specialized static embedding model created through Model2Vec distillation of `sentence-transformers/all-mpnet-base-v2`. This model achieves **73.87% NDCG@10** on CodeSearchNet benchmarks while being **14x smaller** and **15,021x faster** than the original teacher model.
## 🏆 Performance Highlights
- **NDCG@10**: 0.7387 (Best among all distilled models)
- **Mean Reciprocal Rank (MRR)**: 0.7010
- **Recall@5**: 0.8017
- **Model Size**: 7.6M parameters (vs 109M original)
- **Inference Speed**: 15,021x faster than teacher model
- **Memory Usage**: <1GB RAM (vs 8+ GB VRAM for original)
## 📊 CodeSearchNet Performance by Language
| Language | NDCG@10 | MRR | Recall@5 |
|----------|---------|-----|----------|
| **Python** | 0.7899 | 0.7501 | 0.8421 |
| **JavaScript** | 0.7234 | 0.6801 | 0.7895 |
| **Java** | 0.7456 | 0.7089 | 0.8123 |
| **PHP** | 0.7198 | 0.6856 | 0.7834 |
| **Ruby** | 0.7312 | 0.6934 | 0.7912 |
| **Go** | 0.7223 | 0.6876 | 0.7913 |
## 🔧 Model Details
- **Teacher Model**: [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Distillation Method**: Model2Vec + Tokenlearn training on CodeSearchNet
- **Architecture**: Static embeddings (no neural network inference required)
- **Embedding Dimensions**: 256
- **Training Data**: CodeSearchNet code-comment pairs across 6 programming languages
- **Optimization**: PCA dimensionality reduction + SIF weighting + Zipf regularization
- **Vocabulary Size**: 29,528
- **Parameters**: 7.6M
- **Size**: 14.4MB
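Because the embeddings are static, the model can be loaded without any neural-network inference. A minimal usage sketch with the Model2Vec `StaticModel` API is shown below; it assumes the repository stores weights in the standard Model2Vec format.

```python
# Minimal usage sketch, assuming standard Model2Vec-format weights.
from model2vec import StaticModel

model = StaticModel.from_pretrained("sarthak1/codemalt")
embeddings = model.encode([
    "def binary_search(arr, target):",
    "sort a list of integers in ascending order",
])
print(embeddings.shape)  # (2, 256)
```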
## 🎯 Distiller: Code-Specialized Embedding Toolkit
**Distiller** is an independent toolkit built upon [Model2Vec](https://github.com/MinishLab/model2vec) and [Tokenlearn](https://github.com/MinishLab/tokenlearn) for creating code-specialized static embeddings. This package provides a complete pipeline for distilling, training, and evaluating efficient embedding models optimized for code-related tasks.
> **Note**: This is an independent research project that builds upon the Model2Vec framework. We are not affiliated with the MinishLab Model2Vec team, but acknowledge their excellent foundational work.
>[!Important]
>Check out the comprehensive [REPORT.md](REPORT.md) file generated by this toolkit for detailed performance analysis, model comparisons, and evaluation results across different programming languages.
The **distiller** package provides a complete pipeline for:
1. **Distilling code-specialized embeddings** from large sentence transformer models using Model2Vec
2. **Comprehensive evaluation** on CodeSearchNet benchmarks across 6 programming languages
3. **Performance benchmarking** (speed, memory, model size analysis)
4. **Advanced training** with tokenlearn for enhanced code understanding
5. **Analysis and reporting** with visualizations and comparison charts
6. **Cloud-scale processing** with Beam support for distributed execution
### Key Benefits
- **🚀 Performance**: Up to 500x faster inference with 50x smaller models
- **📊 Code-Optimized**: Specialized for code search, classification, and similarity tasks
- **🔬 Comprehensive**: Full evaluation pipeline with CodeSearchNet metrics
- **☁️ Scalable**: Local and cloud execution with Beam support
- **📈 Analytical**: Rich reporting with performance charts and comparisons
## 🚀 Quick Start
### Installation
```bash
# Install with all dependencies
pip install model2vec[train] torch transformers datasets sentence-transformers
pip install typer pydantic plotly matplotlib seaborn
# Install the distiller package (assuming local development)
pip install -e .
```
### Basic Usage
```bash
# Simple distillation of a teacher model
distiller distill
# Distillation with advanced CodeSearchNet training
distiller distill --train
# Evaluate distilled models on CodeSearchNet
distiller evaluate
# Generate comprehensive analysis report
distiller analyze
```
### Python API
```python
from distiller import distill, evaluate, analyze
# Distill a specific model
results = distill.run_local_distillation(
teacher_models=["microsoft/codebert-base"],
enable_training=True, # Include CodeSearchNet fine-tuning
pca_dims=256
)
# Evaluate on CodeSearchNet
evaluation_results = evaluate.run_evaluation(
models=["./code_model2vec/final/codemalt-base-8m"],
max_queries=1000,
languages=["python", "javascript", "java", "go", "php", "ruby"]
)
# Generate analysis report
analyze.main(
results_dir="./code_model2vec/evaluation_results",
model_name="code_model2vec_distilled_models",
output="ANALYSIS_REPORT.md"
)
```
## 📋 Features
### 🔬 Distillation Engine
- **Multiple Teacher Models**: Support for 15+ pre-configured teacher models including:
- Code-specialized: `microsoft/codebert-base`, `BAAI/bge-code-v1`, `Salesforce/SFR-Embedding-Code-2B_R`
- General-purpose: `sentence-transformers/all-mpnet-base-v2`, `BAAI/bge-m3`
- Instruction-tuned: `Alibaba-NLP/gte-Qwen2-1.5B-instruct`
- **CodeMalt Model Series**: Our flagship models follow the naming convention `codemalt-base-[N]m` where `[N]m` indicates millions of parameters (e.g., `codemalt-base-8m` has ~7.6 million parameters)
- **Advanced Training Pipeline**: Optional tokenlearn-based training following the POTION approach (a sketch of step 1 follows this list):
1. Model2Vec distillation (basic static embeddings)
2. Feature extraction using sentence transformers
3. Tokenlearn training on CodeSearchNet data
4. Post-training re-regularization (PCA + SIF weighting)
- **Robust Model Handling**: Automatic compatibility checks and specialized handling for problematic models
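As a reference for step 1 above, the underlying Model2Vec distillation call looks roughly like the following. This is a hedged sketch of the upstream `model2vec` API rather than the distiller pipeline itself; the output path is a placeholder, and SIF weighting / Zipf regularization are applied by the pipeline configuration rather than shown here.

```python
# Hedged sketch of step 1 (basic Model2Vec distillation) using the upstream API.
from model2vec.distill import distill

m2v_model = distill(
    model_name="sentence-transformers/all-mpnet-base-v2",  # teacher model
    pca_dims=256,                                          # matches this card
)
m2v_model.save_pretrained("code_model2vec/base/example_model")  # placeholder path
```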
### 📊 Evaluation Framework
- **CodeSearchNet Evaluation**: Standard code search benchmarks across 6 programming languages
- **Retrieval Metrics**: NDCG@k, MRR, Recall@k, Mean/Median Rank (see the sketch after this list)
- **Performance Benchmarking**:
- Model size analysis (disk usage, parameters, memory footprint)
- Inference speed testing (various batch sizes and text lengths)
- CPU vs GPU performance comparison
- Memory scaling analysis
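For reference, the retrieval metrics reported in this card reduce to simple rank statistics in the common CodeSearchNet setting of one relevant code snippet per query. The sketch below is illustrative only and is not the evaluation code used by the toolkit.

```python
# Hedged sketch of the reported retrieval metrics, assuming exactly one
# relevant snippet per query. `ranks` holds the 1-based rank of the correct
# snippet for each query.
import math

def ndcg_at_k(ranks, k=10):
    # Single relevant item => ideal DCG is 1, so NDCG is 1/log2(rank+1) within k.
    return sum(1.0 / math.log2(r + 1) for r in ranks if r <= k) / len(ranks)

def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall_at_k(ranks, k=5):
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 2, 12, 1]  # toy example
print(ndcg_at_k(ranks), mrr(ranks), recall_at_k(ranks))
```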
### 📈 Analysis & Reporting
- **Comprehensive Reports**: Automated generation of analysis reports with:
- Performance comparison tables
- Language-specific radar charts
- Efficiency analysis (performance vs model size)
- Peer model comparisons
- **Rich Visualizations**: Plotly and Matplotlib charts including:
- Multi-model performance heatmaps
- Batch size scaling curves
- Memory usage patterns
- Model efficiency scatter plots
### ☁️ Cloud Integration
- **Beam Support**: Distributed execution on Beam cloud infrastructure
- **Volume Management**: Persistent storage with checkpoint support
- **Resource Optimization**: GPU-optimized configurations (A100-40G default)
- **Automatic Syncing**: Seamless model and result synchronization
## 🛠️ CLI Reference
### `distiller distill`
Distill teacher models into efficient static embeddings.
```bash
distiller distill [OPTIONS]
Options:
--use-beam Use Beam cloud for distillation
--train Enable advanced training (CodeSearchNet fine-tuning)
--teacher-models TEXT Specific teacher models to distill (can be repeated)
--pca-dims INTEGER PCA dimensions (default: 256)
--clear-cache Clear HuggingFace cache for problematic models
```
**Examples:**
```bash
# Basic distillation of all default models
distiller distill
# Train specific models with advanced CodeSearchNet fine-tuning
distiller distill --train --teacher-models microsoft/codebert-base --teacher-models BAAI/bge-code-v1
# Use Beam cloud with custom PCA dimensions
distiller distill --use-beam --train --pca-dims 512
```
### `distiller evaluate`
Evaluate models on CodeSearchNet benchmarks with performance analysis.
```bash
distiller evaluate [OPTIONS]
Options:
--use-beam Use Beam cloud for evaluation
--skip-third-party Skip third-party models evaluation
--skip-benchmark Skip performance benchmarking
--max-queries INTEGER Maximum queries per language (default: 100)
```
**Examples:**
```bash
# Comprehensive evaluation with benchmarking
distiller evaluate --max-queries 1000
# Quick evaluation without performance benchmarks
distiller evaluate --skip-benchmark --max-queries 100
# Cloud-based evaluation
distiller evaluate --use-beam --max-queries 500
```
### `distiller analyze`
Generate comprehensive analysis reports with visualizations.
```bash
distiller analyze [OPTIONS]
Options:
--results-dir PATH Results directory (default: code_model2vec/evaluation_results)
--model-name TEXT Model name for analysis (default: gte_qwen2_m2v_code (Ours))
--output PATH Output report file (default: REPORT.md)
--export-csv PATH Export results to CSV file
```
**Examples:**
```bash
# Generate standard analysis report
distiller analyze
# Custom analysis with CSV export
distiller analyze --model-name "my_distilled_model" --output custom_report.md --export-csv results.csv
# Analyze specific results directory
distiller analyze --results-dir ./custom_results --output analysis.md
```
## 📁 Directory Structure
The distiller uses a standardized directory structure:
```
code_model2vec/
├── base/ # Basic distilled models (Step 1)
│ └── code_model2vec_{teacher_name}/
├── final/ # Final models (copied from base or after training)
│ └── code_model2vec_{teacher_name}[_fine_tuned]/
├── evaluation_results/ # CodeSearchNet evaluation results
│ └── comprehensive_eval_{model}.json
├── benchmark_results/ # Performance benchmark results
├── analysis_results/ # Analysis reports and charts
│ └── charts/
├── checkpoints/ # Training checkpoints
└── cache/ # Temporary cache files
```
## ⚙️ Configuration
### Teacher Models
Default supported teacher models (configured in `config.py`):
```python
TEACHER_MODELS = [
"Alibaba-NLP/gte-Qwen2-1.5B-instruct", # Instruction-tuned
"BAAI/bge-m3", # Multilingual
"jinaai/jina-embeddings-v3", # Modern architecture
"microsoft/codebert-base", # Code-specialized
"microsoft/graphcodebert-base", # Graph-aware code
"sentence-transformers/all-mpnet-base-v2", # General-purpose
# ... and more
]
```
### Distillation Parameters
```python
# Model2Vec distillation settings
optimal_pca_dims: int = 256
sif_coefficient: float = 1e-3
apply_zipf: bool = True
# Tokenlearn training settings (when --train is enabled)
tokenlearn_dataset: str = "sentence-transformers/codesearchnet"
tokenlearn_text_key: str = "code" # Use code field for training
```
### Evaluation Settings
```python
# CodeSearchNet evaluation
evaluation_languages = ["python", "java", "javascript", "php", "ruby", "go"]
max_queries_per_language: int = 1000
evaluation_metrics = ["ndcg@1", "ndcg@5", "ndcg@10", "mrr", "recall@1", "recall@5", "recall@10"]
```
## 📄 License
This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
This independent research project builds upon several excellent open-source foundations:
- [Model2Vec](https://github.com/MinishLab/model2vec) by MinishLab - Core static embedding distillation framework
- [Tokenlearn](https://github.com/MinishLab/tokenlearn) by MinishLab - Advanced token-level training methodology
- [CodeSearchNet](https://github.com/github/CodeSearchNet) by GitHub - Code search benchmark dataset and evaluation framework
- [Sentence Transformers](https://github.com/UKPLab/sentence-transformers) by UKP Lab - Teacher model ecosystem and training framework
- [Beam](https://beam.cloud) - Distributed cloud computing infrastructure
- [Transformers](https://github.com/huggingface/transformers) by Hugging Face - Model loading and tokenization utilities
**Note**: While this toolkit leverages Model2Vec and Tokenlearn, it is an independent research contribution and is not officially associated with or endorsed by the MinishLab team.
|
dsfghk76/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper | dsfghk76 | 2025-05-31T12:22:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am vicious scavenging grasshopper",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T00:34:53Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am vicious scavenging grasshopper
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dsfghk76/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vicious_scavenging_grasshopper", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |