modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | sequence | string | timestamp[us, tz=UTC] | string
cicyvabraham/openai_whisper-small_llm_lingo_colab3_finetuned | cicyvabraham | 2025-04-28T11:10:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T12:01:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
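While the card's own example is still to be filled in, a minimal sketch is given below. It assumes, purely from the repository name, a fine-tuned `openai/whisper-small` speech-recognition checkpoint; the card itself does not confirm the task.
```python
# Minimal sketch only: the repository name suggests a fine-tuned
# openai/whisper-small ASR checkpoint, but the card does not confirm this.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="cicyvabraham/openai_whisper-small_llm_lingo_colab3_finetuned",
)
result = asr("sample.wav")  # hypothetical path to a local audio file
print(result["text"])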
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
StevenMD/llama1.1 | StevenMD | 2025-04-28T11:05:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T16:09:06Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
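While the card's own example is still to be filled in, a minimal sketch is given below. The card does not state the task (pipeline_tag is null), so a causal language model is assumed here.
```python
# Minimal sketch only: the card does not state the task, so a causal LM
# is assumed; adjust if the architecture differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "StevenMD/llama1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```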
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zrj619/TEN_Turn_Detector | zrj619 | 2025-04-28T11:03:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-28T10:59:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
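While the card's own example is still to be filled in, a minimal sketch is given below, based only on this repository's `feature-extraction` pipeline tag; the intended turn-detection workflow is not documented in the card.
```python
# Minimal sketch based on this repo's feature-extraction pipeline tag;
# the intended turn-detection usage is not documented in this card.
from transformers import pipeline

extractor = pipeline("feature-extraction", model="zrj619/TEN_Turn_Detector")
features = extractor("Hello, is this the end of my turn?")
print(len(features[0]))  # number of token embeddings for the input
```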
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
arcee-samsung/Sky-T1-32B-Flash-Spectrum | arcee-samsung | 2025-04-28T11:02:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2401.08417",
"base_model:NovaSky-AI/Sky-T1-32B-Preview",
"base_model:finetune:NovaSky-AI/Sky-T1-32B-Preview",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T10:52:32Z | ---
base_model: NovaSky-AI/Sky-T1-32B-Preview
library_name: transformers
model_name: outputs/simpo-skyT1-out
tags:
- generated_from_trainer
licence: license
---
# Model Card for outputs/simpo-skyT1-out
This model is a fine-tuned version of [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="arcee-samsung/Sky-T1-32B-Flash-Spectrum", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
fedovtt/af8db0d4-f88d-4988-a478-abc570387694 | fedovtt | 2025-04-28T10:58:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-2b",
"base_model:adapter:unsloth/codegemma-2b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T10:33:02Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af8db0d4-f88d-4988-a478-abc570387694
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-2b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ba0621f537bc8cd4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ba0621f537bc8cd4_train_data.json
type:
field_input: system
field_instruction: question
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/af8db0d4-f88d-4988-a478-abc570387694
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ba0621f537bc8cd4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 37395544-fb64-4439-9c6c-16a1de7f207f
wandb_project: s56-1
wandb_run: your_name
wandb_runid: 37395544-fb64-4439-9c6c-16a1de7f207f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# af8db0d4-f88d-4988-a478-abc570387694
This model is a fine-tuned version of [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on the dataset specified in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8682 | 0.0088 | 200 | 0.9048 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
secmlr/SWE-BENCH-5k-first-2000-claude-search-replace-generation_qwen_code_14b_5k_first_2000_generation | secmlr | 2025-04-28T10:51:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T06:54:59Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: SWE-BENCH-5k-first-2000-claude-search-replace-generation_qwen_code_14b_5k_first_2000_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWE-BENCH-5k-first-2000-claude-search-replace-generation_qwen_code_14b_5k_first_2000_generation
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) on the SWE-BENCH-5k-first-2000-claude-search-replace-generation dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 12
- total_train_batch_size: 48
- total_eval_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 2.20.0
- Tokenizers 0.20.3
|
Triangle104/Athena-3.5-3B-Q4_K_S-GGUF | Triangle104 | 2025-04-28T10:49:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:Spestly/Athena-3.5-3B",
"base_model:quantized:Spestly/Athena-3.5-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T10:49:04Z | ---
base_model: Spestly/Athena-3.5-3B
library_name: transformers
tags:
- unsloth
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Triangle104/Athena-3.5-3B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Spestly/Athena-3.5-3B`](https://huggingface.co/Spestly/Athena-3.5-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-3.5-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -c 2048
```
|
KingNish/Qwen2.5-0.5b-RBase | KingNish | 2025-04-28T10:45:09Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-02T16:25:21Z | ---
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Qwen 2.5 0.5B Model
## Model Description
This model is a compact yet powerful language model trained to answer a variety of questions with impressive quality. Despite its smaller size, it has demonstrated performance comparable to Llama 3.2 1B, and in some cases it even outperforms it. It was trained specifically on 12,800 rows of the Magpie 300k dataset.
## Performance
The Qwen 2.5 model has shown promising results in various tests, including the "strawberry" and decimal-comparison tests, where it successfully provided accurate answers. However, it is important to note that, like many models of its size, it may occasionally produce incorrect answers or flawed reasoning. Continuous improvements and full training are planned to enhance its performance further.
## How to Use
To use the Qwen 2.5 model, you can load it using the Hugging Face Transformers library. Here’s a simple example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "KingNish/Qwen2.5-0.5b-RBase"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Future Work
I am actively working on improving the Qwen 2.5 model by training it on a larger dataset.
# Uploaded model
- **Developed by:** KingNish
- **License:** apache-2.0
- **Finetuned from model:** Qwen/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
ArliAI/QwQ-32B-ArliAI-RpR-v3-GGUF | ArliAI | 2025-04-28T10:43:10Z | 240 | 1 | null | [
"gguf",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T13:15:21Z | ---
license: apache-2.0
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/coilCTGeL0OUYr9PA9zna.jpeg"
language:
- en
base_model:
- ArliAI/QwQ-32B-ArliAI-RpR-v3
---
# QwQ-32B-ArliAI-RpR-v3
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/coilCTGeL0OUYr9PA9zna.jpeg" alt="clickbait" width="500">
<small>Image generated using Arli AI Image Generation https://www.arliai.com/image-generation</small>
## RpR v3 Changes:
- The best model from ArliAI yet:
Extreme creativity and out-of-the-box thinking.
- No longer uses QwQ-abliterated as base:
v3 is a re-do of v2, but without the problems stemming from starting out with a QwQ-lorablated base. That turned out not to be a good move, as it clearly lobotomizes the model; the damage was even visible in higher training and eval loss values.
- Fixed disassociated thoughts:
A lot of effort has gone into completely re-running the RpR dataset generation to make sure the generated thinking tokens always match the model's responses.
- Fixed random refusals:
The previous RpR v1 dataset was generated with vanilla QwQ, which caused some refusals in both the thinking and response examples. With RpR v3 the dataset generation is done using QwQ-abliterated, which prevents any refusals from coming through.
- Fixed nonsense words found in dataset:
A number of apparent censoring attempts were found in the open datasets used for the RPMax/RpR datasets, and these misplaced words/phrases have now been fixed to prevent the model from copying this behavior.
- Rex scheduler:
v3 is trained with the newer and better Rex scheduler instead of the regular cosine scheduler; because Rex keeps the learning rate higher for longer, the model learns nuances from more of the dataset.
## RpR Series Overview: Building on RPMax with Reasoning
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series **builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series**.
RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style unlike other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative-writing reasoning datasets contain only one response per example. Training reasoning models on this type of single-response dataset degrades output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
In order to create RpR, we first had to create the reasoning RP dataset itself by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was done by using the base QwQ Instruct model to create the reasoning process for every turn in the RPMax conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is making sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference, that is, never seeing the reasoning blocks in its context. To do this, the training run used axolotl with a manual, template-free segments dataset so the model is never trained to see the reasoning block in the context, just like how it will be used at inference time.
The result of training QwQ on this dataset with this method is consistently coherent and interesting output, even in long multi-turn RP chats. This is, as far as we know, the first true correctly-trained reasoning model for RP and creative writing.
You can access the model at https://arliai.com and we also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord Server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/
## Model Description
QwQ-32B-ArliAI-RpR-v3 is the third release in the RpR series. It is a 32-billion parameter model fine-tuned using the RpR dataset based on the curated RPMax dataset combined with techniques to maintain reasoning abilities in long multi-turn chats.
### Specs
* **Base Model**: QwQ-32B
* **Max Context Length**: 128K (realistically 32K)
* **Parameters**: 32B
* **Reasoning Model**: Yes
### Training Details
* **Sequence Length**: 8192
* **Epochs**: 1 epoch training (Inherited from RPMax methods)
* **Fine-tuning Method**: RS-QLORA+ (Rank-Stabilized LoRA + LoRA Plus 8x)
* **Rank/Alpha**: 128-rank 128-alpha
* **Learning Rate**: 0.00001
* **Scheduler**: Rex
* **Gradient accumulation**: 32
### Very Nice Training graphs :)
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/gBGmhMB0kgoJTmxs-fvtk.png" alt="Train Loss" width="600">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/DtdMtuoA4bX8mKmxOSY10.png" alt="Eval Loss" width="600">
### Quantization
* **BF16**: https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3
* **GGUF**: https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3-GGUF
### How to use reasoning models correctly in ST
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/njVt2Vir8Isd3ApjTBmoI.png" alt="RpR ST Settings" width="600">
For any reasoning models in general, you need to make sure to set:
* Prefix is set to ONLY \<think> and the suffix is set to ONLY \</think> without any spaces or newlines (enter)
* Reply starts with \<think>
* Always add character names is unchecked
* Include names is set to never
* As always the chat template should also conform to the model being used
Note: Reasoning models work properly only if include names is set to never, since they always expect the eos token of the user turn followed by the \<think> token in order to start reasoning before outputting their response. If you set include names to enabled, then it will always append the character name at the end like "Seraphina:\<eos_token>" which confuses the model on whether it should respond or reason first.
The rest of your sampler parameters can be set as you wish as usual.
If you don't see the reasoning wrapped inside the thinking block, then either your settings are still wrong and don't follow my example, or your ST version is too old and lacks reasoning-block auto-parsing.
If you see the whole response inside the reasoning block, then your \<think> and \</think> reasoning token prefix and suffix might have an extra space or newline, or the model simply isn't a reasoning model smart enough to always put its reasoning between those tokens.
### If you set everything up correctly, it should look like this:
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/IDs6FooZgVTIBNHFHZUZB.png" alt="RpR example response" width="600">
---
<details>
<summary>Details: The RPMax Foundation (Dataset & Training Philosophy)</summary>
*The following sections detail the core philosophy behind the dataset and training methodology originally developed for RPMax, which serves as the foundation for the RpR series.*
### The Goal: Reduced Repetition and Higher Creativity
The goal of the dataset curation used for both RPMax and RpR is to reduce repetition and increase the model's ability to write creatively in the different situations presented to it. In practice this means a model that writes very different responses across situations, without falling into predictable tropes.
### What is repetition and creativity?
First of all, creativity here should mean the variety of output the model is capable of producing; it should not be confused with writing prose. When a model writes pleasantly, the way a novelist would write, that is not creative writing in itself. It is just a model with a certain pleasant prose style, so a model that writes nicely is not necessarily a creative model.
Repetition and creativity are essentially intertwined, so if a model is repetitive it can also be said to be uncreative, since it cannot write new things and can only repeat responses similar to those it has produced before. There are actually two very different forms of repetition.
**In-context repetition:** When people say a model is repetitive, they usually mean a model that likes to repeat the same phrases within a single conversation. An example of this is a model that says a character "flicks her hair and...." and then starts to prepend "flicks her hair and..." to every other action that character does.
It can be said that the model is boring, but even in real people's writing this kind of repetition can be intentional, to subtly prove a point or showcase a character's traits in some scenarios. So this type of repetition is not always bad, and completely discouraging a model from doing it does not always improve the model's writing ability.
In this regard, RPMax and RpR are not yet focused on eliminating this type of repetition, so some in-context repetition may be visible in the outputs. Eliminating it will be the next big step for the RPMax and RpR series of models.
**Cross-context repetition:** A second worse type of repetition is a model's tendency to repeat the same phrases or tropes in very different situations. An example is a model that likes to repeat the infamous "shivers down my spine" phrase in wildly different conversations that don't necessarily fit with that phrase.
This type of repetition is ALWAYS bad, as it is a sign that the model has over-fitted to that style of "creative writing" it has often seen in the training dataset. A model's tendency toward cross-context repetition is also usually visible in how it chooses similar, repetitive names when writing stories, such as the infamous "elara" and "whispering woods" names.
The primary goal of the dataset curation for RPMax and RpR is to create a highly creative model by reducing cross-context repetition, as that is the type of repetition that follows you through different conversations. This is combated by making sure the dataset does not have repetitions of the same situations or characters in different example entries.
### Dataset Curation
The success of models trained on this dataset (including RPMax and now RpR) is thanks to the training method and the unique dataset created for fine-tuning. It contains as many open-source creative writing and RP datasets as could be found (all from Hugging Face), curated to weed out datasets that are purely synthetic generations, since those often only serve to dumb down the model and make it learn GPT-isms (slop) rather than help.
Then Llama 3.1 8B (or a similarly capable model) is used to create a database of the characters and situations that are portrayed in these datasets, which is then used to de-dupe these datasets to make sure that there is only a single entry of any character or situation.
### The Golden Rule of Fine-Tuning
Unlike the initial pre-training stage, where for the most part the more data you throw at the model the better it becomes, the golden rule for fine-tuning isn't quantity but quality over quantity. The dataset used here is actually orders of magnitude smaller than it would be if it included repeated characters and situations, but the end result is a model that does not feel like just another "in-breed" of another creative writing/RP model.
### Training Parameters and Unconventional Approach
The usual way is to have a low learning rate and high gradient accumulation for better loss stability, and then run multiple epochs of the training run until the loss is acceptable.
The RPMax and RpR methodology, however, uses only **one single epoch**, a low gradient accumulation, and a higher than normal learning rate. The loss curve during training is actually unstable and jumps up and down a lot, but if it is smoothed out, it is steadily decreasing over time. The theory is that this allows the models to learn from each individual example in the dataset much more, and by not showing the model the same example twice using multiple epochs, it stops the model from latching on and reinforcing a single character or story trope.
The jumping up and down of the loss during training happens because, as the model is trained on a new entry from the dataset, it has never seen a similar example before and therefore can't really predict an answer similar to the example entry. The relatively high end loss of 1.0 or slightly above is actually acceptable, because the goal was never to create a model that outputs exactly like its training dataset, but rather one creative enough to make up its own style of responses.
This is different from training a model in a particular domain and needing the model to reliably be able to output like the example dataset, such as when training a model on a company's internal knowledge base.
</details>
---
## Try It Out!
Model preference is subjective, so please do try QwQ-32B-ArliAI-RpR-v3 for yourself. Your feedback, both good and bad, is always valuable and will help us improve future RPMax and RpR models. |
mradermacher/lwd-Mirau-RP-14B-GGUF | mradermacher | 2025-04-28T10:42:01Z | 191 | 1 | transformers | [
"transformers",
"gguf",
"qwen",
"qwen2",
"rp",
"roleplay",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:AetherArchitectural/lwd-Mirau-RP-14B",
"base_model:quantized:AetherArchitectural/lwd-Mirau-RP-14B",
"license:unknown",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-09T04:54:53Z | ---
base_model: AetherArchitectural/lwd-Mirau-RP-14B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: unknown
quantized_by: mradermacher
tags:
- qwen
- qwen2
- rp
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AetherArchitectural/lwd-Mirau-RP-14B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
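As a hedged, concrete alternative, a single quant file from this repo can be pulled and run with the llama-cpp-python bindings (assuming `pip install llama-cpp-python huggingface_hub`; any llama.cpp-compatible runtime works similarly):
```python
# Sketch assuming the llama-cpp-python bindings; the filename matches the
# Q4_K_M entry in the table below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/lwd-Mirau-RP-14B-GGUF",
    filename="lwd-Mirau-RP-14B.Q4_K_M.gguf",
)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```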
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lwd-Mirau-RP-14B-GGUF/resolve/main/lwd-Mirau-RP-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
osmr/stable-diffusion-v1-5-lora-db-ghibli | osmr | 2025-04-28T10:41:47Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] | text-to-image | 2025-04-28T10:41:19Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
GBL, a man and a woman sitting at a table with glasses of wine in front of
them
parameters:
negative_prompt: >-
low quality, deformed, ugly, bad art, poorly drawn, bad anatomy, low
detail, unrealistic
output:
url: images/test1_gs75_ls10.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: GBL
license: mit
---
# stable-diffusion-v1-5-lora-db-ghibli
<Gallery />
## Model description
Trained with the `train_dreambooth_lora.py` script from diffusers (0.33.1):
```bash
train_dreambooth_lora.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --dataset_name="osmr/ghibli" \
  --caption_column="prompt" \
  --resolution=512 \
  --center_crop \
  --train_batch_size=8 \
  --gradient_accumulation_steps=1 \
  --num_train_epochs=12 \
  --learning_rate=9e-5 \
  --lr_scheduler=cosine \
  --lr_warmup_steps=32 \
  --rank=16 \
  --train_text_encoder \
  --seed=264893 \
  --instance_data_dir="ds_images" \
  --validation_epochs=1 \
  --validation_prompt="GBL, a man and a woman sitting at a table with glasses of wine in front of them"
```
## Trigger words
You should use `GBL` to trigger the image generation.
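As a hedged sketch of inference (assuming the standard 🤗 Diffusers LoRA-loading workflow; the card itself does not document inference settings), the weights can be loaded on top of the base model like this:
```python
# Sketch assuming the standard Diffusers LoRA workflow; the prompt and
# negative prompt reuse the widget example from this card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("osmr/stable-diffusion-v1-5-lora-db-ghibli")

image = pipe(
    "GBL, a man and a woman sitting at a table with glasses of wine in front of them",
    negative_prompt="low quality, deformed, ugly, bad art, poorly drawn, bad anatomy, low detail, unrealistic",
).images[0]
image.save("ghibli_sample.png")
```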
## Download model
Weights for this model are available in Safetensors format.
[Download](/osmr/stable-diffusion-v1-5-lora-db-ghibli/tree/main) them in the Files & versions tab.
|
kavlab/qwen2.5-1.5b-text-to-sql | kavlab | 2025-04-28T10:36:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T07:08:49Z | ---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: qwen2.5-1.5b-text-to-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for qwen2.5-1.5b-text-to-sql
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kavlab/qwen2.5-1.5b-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Vapor_v2_7B-GGUF | mradermacher | 2025-04-28T10:34:44Z | 156 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:FourOhFour/Vapor_v2_7B",
"base_model:quantized:FourOhFour/Vapor_v2_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-13T10:26:18Z | ---
base_model: FourOhFour/Vapor_v2_7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FourOhFour/Vapor_v2_7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
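As a hedged, concrete alternative, a single quant file from this repo can be pulled and run with the llama-cpp-python bindings (assuming `pip install llama-cpp-python huggingface_hub`):
```python
# Sketch assuming the llama-cpp-python bindings; the filename matches the
# Q4_K_M entry in the table below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Vapor_v2_7B-GGUF",
    filename="Vapor_v2_7B.Q4_K_M.gguf",
)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```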
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
samia4566/llama2-7B-finetuned-chat-travel | samia4566 | 2025-04-28T10:34:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:ahxt/llama2_xs_460M_experimental",
"base_model:finetune:ahxt/llama2_xs_460M_experimental",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T10:34:01Z | ---
base_model: ahxt/llama2_xs_460M_experimental
library_name: transformers
model_name: llama2-7B-finetuned-chat-travel
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama2-7B-finetuned-chat-travel
This model is a fine-tuned version of [ahxt/llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samia4566/llama2-7B-finetuned-chat-travel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gassouma-samia-test/huggingface/runs/um60xfst)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MaestrAI/dr__elara_myles-lora-1745835836 | MaestrAI | 2025-04-28T10:31:07Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T10:23:55Z | # dr__elara_myles LoRA Model
This is a LoRA model for the character Dr. Elara Myles.
Created at 2025-04-28 12:23:56
|
mradermacher/DATA-AI_Chat_3_1.5B-GGUF | mradermacher | 2025-04-28T10:31:07Z | 178 | 1 | transformers | [
"transformers",
"gguf",
"human-feedback",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Mattimax/DATA-AI_Chat_3_1.5B",
"base_model:quantized:Mattimax/DATA-AI_Chat_3_1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-15T07:09:28Z | ---
base_model: Mattimax/DATA-AI_Chat_3_1.5B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- human-feedback
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Mattimax/DATA-AI_Chat_3_1.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
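As a sketch of a Python-based alternative (an illustration, not part of the upstream instructions — the filename below is the Q4_K_M quant from the Provided Quants table), the files can also be fetched and run with `huggingface_hub` and `llama-cpp-python`:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the "fast, recommended" Q4_K_M quant listed in the table below
path = hf_hub_download(
    repo_id="mradermacher/DATA-AI_Chat_3_1.5B-GGUF",
    filename="DATA-AI_Chat_3_1.5B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```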
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DATA-AI_Chat_3_1.5B-GGUF/resolve/main/DATA-AI_Chat_3_1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
madajabbar/DeepSeek-R1-Idstar-v1 | madajabbar | 2025-04-28T10:30:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T10:25:32Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** madajabbar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gSIBEQ3J5AcQT/hjadGs | gSIBEQ3J5AcQT | 2025-04-28T10:26:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T10:25:57Z | ---
license: apache-2.0
---
|
performanceoptician/Qwen2.5-7B-Instruct-IQ3_XXS-GGUF | performanceoptician | 2025-04-28T10:24:17Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-11-25T13:52:28Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# performanceoptician/Qwen2.5-7B-Instruct-IQ3_XXS-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo performanceoptician/Qwen2.5-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-7b-instruct-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo performanceoptician/Qwen2.5-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-7b-instruct-iq3_xxs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo performanceoptician/Qwen2.5-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-7b-instruct-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo performanceoptician/Qwen2.5-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-7b-instruct-iq3_xxs-imat.gguf -c 2048
```
|
VhRu7Vaq1lh/dhakda | VhRu7Vaq1lh | 2025-04-28T10:21:14Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T10:21:14Z | ---
license: apache-2.0
---
|
quynh1421/results | quynh1421 | 2025-04-28T10:20:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T10:18:50Z | ---
library_name: transformers
license: agpl-3.0
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2846
- Accuracy: 0.8930
- F1: 0.8925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5435 | 1.0 | 125 | 0.3400 | 0.8588 | 0.8582 |
| 0.3945 | 2.0 | 250 | 0.2906 | 0.8824 | 0.8822 |
| 0.3423 | 3.0 | 375 | 0.2776 | 0.8945 | 0.8941 |
| 0.3195 | 4.0 | 500 | 0.2832 | 0.8925 | 0.8920 |
| 0.2966 | 5.0 | 625 | 0.2846 | 0.8930 | 0.8925 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
10Prem09/finetuned_Qwen2.5_Coder_0.5B_Instruct | 10Prem09 | 2025-04-28T10:19:02Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"code",
"text2text-generation",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-04-28T06:27:47Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text2text-generation
tags:
- code
---
# 🧠 Fine-Tuned Qwen 2.5 Coder: Python Data Engineering Assistant
## 📌 Model Overview
This model is a fine-tuned version of **Qwen2.5-Coder-0.5B-Instruct**, adapted specifically to write clean, structured Python code for **data engineering and data transformation tasks**. It is especially effective for single-step operations such as joining datasets, handling quarters, replacing null values, and returning structured output.
Fine-tuned by **S Prem Kaushik**, this model is optimized for precision, clean code generation, and adherence to Pythonic best practices.
---
## 🎯 Objective
This model consistently follows best practices in data transformation, including:
- ✅ **Column Collision Handling**: Automatically applies `remove_column_collisions()` after joins.
- 📅 **Quarter & Date Handling**: Uses fiscal quarter mapping from a configurable dictionary.
- 🧼 **NaT/NaN Replacement**: Replaces `NaT` and `NaN` with Python `None`.
- 📦 **Function-Scoped Imports**: All `import` statements are inside functions.
- 📤 **Input/Output Structure**: Returns results as structured dictionaries using `.to_dict('records')`.
---
## 🧪 Training Data
- **Format**: JSONL with `system`, `query`, and `response` entries
- **Domain**: Realistic one-step data manipulation tasks in pandas
- **Coverage**: Merging, joining, null replacement, quarter logic, grouping, etc.
- **Author**: S Prem Kaushik
---
## 🛠 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model
model = AutoModelForCausalLM.from_pretrained("10Prem09/finetuned_Qwen2.5_Coder_0.5B_Instruct", torch_dtype=torch.float16).cuda()
tokenizer = AutoTokenizer.from_pretrained("10Prem09/finetuned_Qwen2.5_Coder_0.5B_Instruct")
model.eval()
# Define prompt using raw role tags (no chat template)
system_prompt = "Role: Python Code Generator."
user_prompt = "Merge employee and department data, then filter by department to calculate the average salary for each employee."
prompt = f"<|system|>\n{system_prompt}\n<|user|>\n{user_prompt}\n<|assistant|>\n"
# Tokenize input
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generate response
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=300,      # can be varied between 300 and 512 depending on the task
        repetition_penalty=1.3,  # values between 1.2 and 1.4 tend to give the best results
        pad_token_id=tokenizer.eos_token_id
    )
# Decode and print result
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code)
```
---
## 📊 Sample Dataset Entries
### Example 1: Campaign ROI by Quarter with Column Collision Handling
```json
{
"system": "Role: Python Code Generator...",
"query": "Analyze marketing campaign ROI by fiscal quarters, handling column collisions with a left join.",
"response": "```python\ndef campaign_roi_by_quarter(campaign_df, spending_df, fiscal_config):\n import pandas as pd\n import numpy as np\n campaign_df['launch_date'] = pd.to_datetime(campaign_df['launch_date'])\n campaign_df['year'] = campaign_df['launch_date'].dt.year\n campaign_df['month'] = campaign_df['launch_date'].dt.month\n\n def assign_quarter(row):\n for q, (start, end) in fiscal_config.items():\n m = row['month']\n if start > end:\n if m >= start or m <= end:\n return f\"Q{q} {row['year'] if m >= start else row['year']-1}\"\n elif start <= m <= end:\n return f\"Q{q} {row['year']}\"\n\n campaign_df['quarter'] = campaign_df.apply(assign_quarter, axis=1)\n merged_df = pd.merge(campaign_df, spending_df, on='campaign_id', how='left', suffixes=('', '_y'))\n merged_df = remove_column_collisions(merged_df, 'left')\n merged_df['roi'] = merged_df['revenue_generated'] / merged_df['spending']\n result = merged_df.groupby('quarter')['roi'].mean().reset_index()\n result.replace({pd.NaT: None, np.nan: None}, inplace=True)\n return result.to_dict('records')\n\nfiscal_config = {1: (7,9), 2: (10,12), 3: (1,3), 4: (4,6)}\nresult = campaign_roi_by_quarter(source1, source2, fiscal_config)\n```"
}
```
### Example 2: Supplier Deliveries by Fiscal Quarter
```json
{
"system": "Role: Python Code Generator...",
"query": "Track supplier deliveries by fiscal quarters: Q1 (March-May), Q2 (June-August), Q3 (September-November), Q4 (December-February).",
"response": "```python\ndef deliveries_by_quarter(deliveries_df, fiscal_config):\n import pandas as pd\n import numpy as np\n deliveries_df['delivery_date'] = pd.to_datetime(deliveries_df['delivery_date'])\n deliveries_df['year'] = deliveries_df['delivery_date'].dt.year\n deliveries_df['month'] = deliveries_df['delivery_date'].dt.month\n\n def assign_quarter(row):\n for q, (start, end) in fiscal_config.items():\n m = row['month']\n if start > end:\n if m >= start or m <= end:\n return f\"Q{q} {row['year'] if m >= start else row['year']-1}\"\n elif start <= m <= end:\n return f\"Q{q} {row['year']}\"\n\n deliveries_df['quarter'] = deliveries_df.apply(assign_quarter, axis=1)\n result = deliveries_df.groupby('quarter').size().reset_index(name='delivery_count')\n result.replace({pd.NaT: None, np.nan: None}, inplace=True)\n return result.to_dict('records')\n\nfiscal_config = {1: (3, 5), 2: (6, 8), 3: (9, 11), 4: (12, 2)}\nresult = deliveries_by_quarter(source1, fiscal_config)\n```"
}
```
### Example 3: Join Project and Employee Tables
```json
{
"system": "Role: Python Code Generator. Generate clean, function-based Python code based on user requests.",
"query": "Find all projects and their associated manager names by joining project and employee data.",
"response": "```python\ndef get_project_manager_info(projects_df, employees_df):\n import pandas as pd\n import numpy as np\n\n merged = projects_df.merge(employees_df, left_on='manager_id', right_on='employee_id', how='left')\n merged = remove_column_collisions(merged, 'left')\n\n result = merged[['project_name', 'manager_name']]\n result.replace({pd.NaT: None, np.nan: None}, inplace=True)\n\n return result.to_dict('records')\n\nresult = get_project_manager_info(source1, source2)\n```"
}
```
---
## 📦 Model Details
- **Base Model**: Qwen2.5-Coder-0.5B-Instruct
- **Fine-Tuned By**: S Prem Kaushik
- **Specialization**: Python data manipulation for ETL, reporting, and time-based aggregation
---
## 🛡️ Limitations
- Designed for single-step transformations; complex pipelines should be modularized.
- Assumes `remove_column_collisions()` is available in the environment (a hypothetical sketch of such a helper follows this list).
- Chat-style prompt formatting is recommended for best results.
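Since the generated code calls `remove_column_collisions()` without defining it, here is a minimal sketch of what such a helper might look like — an assumption about its behavior (dropping suffixed duplicates in favor of one side of the join), not the implementation used during training:
```python
import pandas as pd

def remove_column_collisions(df: pd.DataFrame, keep: str = "left", suffix: str = "_y") -> pd.DataFrame:
    """Hypothetical helper: resolve join collisions produced by suffixes=('', '_y')."""
    collided = [c for c in df.columns if c.endswith(suffix)]
    if keep == "left":
        # Keep the unsuffixed (left) columns; drop the right-hand duplicates
        return df.drop(columns=collided)
    # keep == "right": drop the left originals and strip the suffix from the right columns
    originals = [c[: -len(suffix)] for c in collided]
    df = df.drop(columns=[c for c in originals if c in df.columns])
    return df.rename(columns={c: c[: -len(suffix)] for c in collided})
```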
## 📬 Contact & Dataset Access
If you are interested in accessing the fine-tuning dataset, reviewing the training code, or exploring potential collaborative opportunities, you are welcome to reach out.
Please contact me via my Hugging Face profile:
🔗 https://huggingface.co/10Prem09
Additional contact links (e.g., GitHub or LinkedIn) are available on my profile page. |
Cpelpa/cpelpa-1.0.5_14b-redtrain | Cpelpa | 2025-04-28T10:18:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T10:16:17Z | ---
base_model: unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Cpelpa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
deeponh/mal_9b_2b_D2 | deeponh | 2025-04-28T10:09:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T10:04:49Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
THUDM/GLM-Z1-9B-0414 | THUDM | 2025-04-28T10:07:32Z | 3,456 | 55 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"zh",
"en",
"arxiv:2406.12793",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T06:39:51Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-Z1-9B-0414
## Introduction
The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. This was developed based on GLM-4-32B-0414 through cold start and extended reinforcement learning, as well as further training of the model on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During the training process, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Performance
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png">
</p>
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png">
</p>
## Model Usage Guidelines
### I. Sampling Parameters
| Parameter | Recommended Value | Description |
| ------------ | ----------------- | -------------------------------------------- |
| temperature | **0.6** | Balances creativity and stability |
| top_p | **0.95** | Cumulative probability threshold for sampling|
| top_k | **40** | Filters out rare tokens while maintaining diversity |
| max_new_tokens | **30000** | Leaves enough tokens for thinking |
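As a sketch, these recommendations map onto `model.generate` roughly as follows (assuming the `model`, `tokenizer`, and `inputs` set up in the Inference Code section below):
```python
out = model.generate(
    **inputs,
    do_sample=True,        # required for temperature/top_p/top_k to take effect
    temperature=0.6,
    top_p=0.95,
    top_k=40,
    max_new_tokens=30000,  # leave plenty of room for the thinking trace
)
```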
### II. Enforced Thinking
- Add \<think\>\n to the **first line**: Ensures the model thinks before responding (a minimal sketch follows this list)
- When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior
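If you build prompts by hand instead of relying on the template, a minimal sketch of the same enforcement might look like this (an illustration only; `tokenizer` and `message` are assumed to be set up as in the Inference Code section below):
```python
prompt = tokenizer.apply_chat_template(message, tokenize=False, add_generation_prompt=True)
if not prompt.rstrip().endswith("<think>"):
    prompt += "<think>\n"  # force the model to open a thinking block before answering
```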
### III. Dialogue History Trimming
- Retain only the **final user-visible reply**.
Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja`
### IV. Handling Long Contexts (YaRN)
- When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling)
- In supported frameworks, add the following snippet to `config.json`:
```json
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
```
- **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed.
## Inference Code
Make sure you are using `transformers>=4.51.3`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-Z1-9B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}]
inputs = tokenizer.apply_chat_template(
    message,
    return_tensors="pt",
    add_generation_prompt=True,
    return_dict=True,
).to(model.device)
generate_kwargs = {
    "input_ids": inputs["input_ids"],
    "attention_mask": inputs["attention_mask"],
    "max_new_tokens": 4096,
    "do_sample": False,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
## Citations
If you find our work useful, please consider citing the following paper.
```
@misc{glm2024chatglm,
title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
year={2024},
eprint={2406.12793},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
vmpsergio/7a6d0cd0-604b-4918-aeca-f8c4d04a49ed | vmpsergio | 2025-04-28T10:07:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T09:56:16Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a6d0cd0-604b-4918-aeca-f8c4d04a49ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - b04b6110aa36d62e_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b04b6110aa36d62e_train_data.json
  type:
    field_input: original_version
    field_instruction: title
    field_output: french_version
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/7a6d0cd0-604b-4918-aeca-f8c4d04a49ed
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b04b6110aa36d62e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 90278a8d-6dbd-4954-a0a1-d18084d85b28
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 90278a8d-6dbd-4954-a0a1-d18084d85b28
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7a6d0cd0-604b-4918-aeca-f8c4d04a49ed
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5562 | 0.0171 | 200 | 1.4409 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/ChiseLLM-7B-Mix-GGUF | mradermacher | 2025-04-28T10:05:54Z | 304 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:observerw/ChiseLLM-Completion",
"dataset:observerw/ChiseLLM-Decompile",
"base_model:observerw/ChiseLLM-7B",
"base_model:quantized:observerw/ChiseLLM-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-14T10:55:23Z | ---
base_model: observerw/ChiseLLM-7B
datasets:
- observerw/ChiseLLM-Completion
- observerw/ChiseLLM-Decompile
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/observerw/ChiseLLM-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF/resolve/main/ChiseLLM-7B-Mix.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ail-sa/fuller_curvy_pragya_das | ail-sa | 2025-04-28T10:05:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T08:45:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sidf
---
# Fuller_Curvy_Pragya_Das
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sidf` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "Sidf",
    "lora_weights": "https://huggingface.co/ail-sa/fuller_curvy_pragya_das/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/fuller_curvy_pragya_das', weight_name='lora.safetensors')
image = pipeline('Sidf').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/fuller_curvy_pragya_das/discussions) to add images that show off what you’ve made with this LoRA.
|
rahaf-aswad/Mini-GenModel | rahaf-aswad | 2025-04-28T10:04:03Z | 5 | 0 | null | [
"pytorch",
"safetensors",
"region:us"
] | null | 2025-04-25T10:32:48Z | A lightweight Transformer-based language model trained on a small Shakespeare dataset using PyTorch.
Designed to run on CPU, making it ideal for educational or constrained environments.
Weights & Biases run: https://wandb.ai/rahaf-aswad-hochschule-hannover/my-language-model
Hugging Face repo: https://huggingface.co/rahaf-aswad/Mini-GenModel |
akseljoonas/Agentic-Qwen-3B-e12-lr3-b8 | akseljoonas | 2025-04-28T10:01:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:smolagents/training-traces",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T09:50:39Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: smolagents/training-traces
library_name: transformers
model_name: Agentic-Qwen-3B-e12-lr3-b8
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Agentic-Qwen-3B-e12-lr3-b8
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [smolagents/training-traces](https://huggingface.co/datasets/smolagents/training-traces) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akseljoonas/Agentic-Qwen-3B-e12-lr3-b8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/akseljoonas-university-of-groningen/huggingface/runs/khmq58rv)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kkokas/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf | kkokas | 2025-04-28T09:59:43Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am leggy robust wolf",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T08:55:29Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am leggy robust wolf
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kkokas/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
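For reference, the core idea restated from the cited paper: GRPO dispenses with a learned value baseline by sampling a group of $G$ completions per prompt and standardizing each reward within the group,

$$
\hat{A}_i = \frac{r_i - \mathrm{mean}(r_1, \ldots, r_G)}{\mathrm{std}(r_1, \ldots, r_G)},
$$

which is then used as the advantage in a PPO-style clipped policy update.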
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BBorg/a2c-PandaReachDense-v3 | BBorg | 2025-04-28T09:59:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T09:54:59Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` convention used by huggingface_sb3 repos):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; huggingface_sb3 repos conventionally store <algo>-<env>.zip
checkpoint = load_from_hub(repo_id="BBorg/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF | hellork | 2025-04-28T09:59:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Rombo-Org/Rombo-LLM-V2.5-Qwen-7b",
"base_model:quantized:Rombo-Org/Rombo-LLM-V2.5-Qwen-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T09:45:37Z | ---
base_model: Rombo-Org/Rombo-LLM-V2.5-Qwen-7b
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# TESTING...TESTING! The quantization used on this model may reduce quality, but it might offer some speedup with <= 4GB VRAM. TESTING...
# hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF
This model was converted to GGUF format from [`Rombo-Org/Rombo-LLM-V2.5-Qwen-7b`](https://huggingface.co/Rombo-Org/Rombo-LLM-V2.5-Qwen-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Rombo-Org/Rombo-LLM-V2.5-Qwen-7b) for more details on the model.
## Use with llama.cpp
## Compile to take advantage of Nvidia CUDA hardware
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama*
# look at docs for other hardware builds or to make sure none of this has changed.
cmake -B build -DGGML_CUDA=ON
CMAKE_ARGS="-DGGML_CUDA=on" cmake --build build --config Release # -j6 (optional: use a number less than the number of cores)
```
## Without CUDA
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -c 2048
```
|
syyhmg/lora_cheque_model | syyhmg | 2025-04-28T09:53:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T09:53:33Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** syyhmg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Latentnaut/Mike_Tyson | Latentnaut | 2025-04-28T09:50:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:50:05Z | ---
license: apache-2.0
---
|
datapaf/zett_deepseek_multilang_racket | datapaf | 2025-04-28T09:44:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T09:38:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rgM2nfc50k/rgM2nfc50k | rgM2nfc50k | 2025-04-28T09:43:47Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T09:43:47Z | ---
license: bigcode-openrail-m
---
|
nomadrp/mdpo-th-v20 | nomadrp | 2025-04-28T09:42:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T09:36:08Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: mdpo-th-v20
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for mdpo-th-v20
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/mdpo-th-v20", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
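For readers unfamiliar with the method, here is a minimal sketch of the DPO objective (illustrative only, not TRL's actual implementation; inputs are per-sequence summed log-probabilities):
```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # DPO's implicit reward is the beta-scaled log-ratio between the
    # policy and a frozen reference model, per sequence.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the preference margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```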
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tezlm/Qwen2.5-14B-Q2_K-GGUF | tezlm | 2025-04-28T09:39:29Z | 3 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B",
"base_model:quantized:Qwen/Qwen2.5-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-25T22:16:22Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
base_model: Qwen/Qwen2.5-14B
---
# tezlm/Qwen2.5-14B-Q2_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-14B`](https://huggingface.co/Qwen/Qwen2.5-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tezlm/Qwen2.5-14B-Q2_K-GGUF --hf-file qwen2.5-14b-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tezlm/Qwen2.5-14B-Q2_K-GGUF --hf-file qwen2.5-14b-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tezlm/Qwen2.5-14B-Q2_K-GGUF --hf-file qwen2.5-14b-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tezlm/Qwen2.5-14B-Q2_K-GGUF --hf-file qwen2.5-14b-q2_k.gguf -c 2048
```
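You can also load the quantized file from Python with llama-cpp-python (`pip install llama-cpp-python`); a minimal sketch, assuming a recent version that provides `Llama.from_pretrained`:
```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub (requires huggingface_hub) and loads it.
llm = Llama.from_pretrained(
    repo_id="tezlm/Qwen2.5-14B-Q2_K-GGUF",
    filename="qwen2.5-14b-q2_k.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```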
|
Silin1590/Mathstral-7B-Soc-CoA-Ep0 | Silin1590 | 2025-04-28T09:36:26Z | 0 | 0 | null | [
"safetensors",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:35:01Z | ---
license: apache-2.0
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Mathstral-7b-v0.1
Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B.
You can read more in the [official blog post](https://mistral.ai/news/mathstral/).
## Installation
It is recommended to use `mistralai/Mathstral-7b-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference)
```
pip install mistral_inference>=1.2.0
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Mathstral-7b-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mathstral-7b-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
```
mistral-chat $HOME/mistral_models/Mathstral-7b-v0.1 --instruct --max_tokens 256
```
You can then start chatting with the model, *e.g.* prompt it with something like:
*"Albert likes to surf every week. Each surfing session lasts for 4 hours and costs $20 per hour. How much would Albert spend in 5 weeks?"*
### Usage in `transformers`
To use this model within the `transformers` library, install the latest release with `pip install --upgrade transformers` and run, for instance:
```py
from transformers import pipeline
import torch
checkpoint = "mistralai/Mathstral-7b-v0.1"
pipe = pipeline("text-generation", checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
prompt = [{"role": "user", "content": "What are the roots of unity?"}]
out = pipe(prompt, max_new_tokens = 512)
print(out[0]['generated_text'][-1])
>>> "{'role': 'assistant', 'content': ' The roots of unity are the complex numbers that satisfy the equation $z^n = 1$, where $n$ is a positive integer. These roots are evenly spaced around the unit circle in the complex plane, and they have a variety of interesting properties and applications in mathematics and physics.'}"
```
You can also manually tokenize the input and generate text from the model, rather than using the higher-level pipeline:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
checkpoint = "mistralai/Mathstral-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
prompt = [{"role": "user", "content": "What are the roots of unity?"}]
tokenized_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_dict=True, return_tensors="pt").to(model.device)
out = model.generate(**tokenized_prompt, max_new_tokens=512)
tokenizer.decode(out[0])
>>> '<s>[INST] What are the roots of unity?[/INST] The roots of unity are the complex numbers that satisfy the equation $z^n = 1$, where $n$ is a positive integer. These roots are evenly spaced around the unit circle in the complex plane, and they have a variety of interesting properties and applications in mathematics and physics.</s>'
```
## Evaluation
We evaluate Mathstral 7B and open-weight models of similar size on industry-standard benchmarks.
| Benchmarks | MATH | GSM8K (8-shot) | Odyssey Math maj@16 | GRE Math maj@16 | AMC 2023 maj@16 | AIME 2024 maj@16 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Mathstral 7B | **56.6** | 77.1 | **37.2** | 56.9 | **42.4** | **2/30** |
| DeepSeek Math 7B | 44.4 | **80.6** | 27.6 | 44.6 | 28.0 | 0/30 |
| Llama3 8B | 28.4 | 75.4 | 24.0 | 26.2 | 34.4 | 0/30 |
| GLM4 9B | 50.2 | 48.8 | 18.9 | 46.2 | 36.0 | 1/30 |
| QWen2 7B | **56.8** | 32.7 | 24.8 | **58.5** | 35.2 | **2/30** |
| Gemma2 9B | 48.3 | 69.5 | 18.6 | 52.3 | 31.2 | 1/30 |
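The maj@16 columns report majority voting: 16 answers are sampled per problem and the most frequent final answer is scored. A minimal sketch of the idea, where `sample_answer` is a hypothetical generation function:
```python
from collections import Counter

def majority_at_k(sample_answer, problem, k=16):
    # maj@k: draw k candidate answers, keep the most common one.
    answers = [sample_answer(problem) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]
```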
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
eMhGL4E2s70/kskhfjsk | eMhGL4E2s70 | 2025-04-28T09:35:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:35:02Z | ---
license: apache-2.0
---
|
Lekhansh/Llama-3.2-3B-Instruct-Scientific-Text-Cleaner_merged | Lekhansh | 2025-04-28T09:33:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T09:28:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
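Since the card leaves this section blank, here is a minimal sketch assuming the standard `transformers` text-generation API (the repository tags indicate a Llama text-generation model; the prompt is purely illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Lekhansh/Llama-3.2-3B-Instruct-Scientific-Text-Cleaner_merged",
    device_map="auto",
)
messages = [{"role": "user", "content": "Clean up this passage: The experimentt was sucessful."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```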
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ppirate467/1 | ppirate467 | 2025-04-28T09:33:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:33:05Z | ---
license: apache-2.0
---
|
herman66/Qwen2.5-0.5B-Open-R1-Distill | herman66 | 2025-04-28T09:32:16Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:HuggingFaceH4/Bespoke-Stratos-17k",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-07T14:22:30Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
datasets: HuggingFaceH4/Bespoke-Stratos-17k
library_name: transformers
model_name: Qwen2.5-0.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Qwen2.5-0.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [HuggingFaceH4/Bespoke-Stratos-17k](https://huggingface.co/datasets/HuggingFaceH4/Bespoke-Stratos-17k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="herman66/Qwen2.5-0.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/le6400-sgpjbg-com/huggingface/runs/kg70uger)
This model was trained with SFT.
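As a rough illustration of the setup, a minimal TRL `SFTTrainer` sketch (the exact arguments used for this run are not recorded in the card):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Conversational dataset with a "messages" column; TRL applies the chat template.
dataset = load_dataset("HuggingFaceH4/Bespoke-Stratos-17k", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-0.5B-Open-R1-Distill"),
)
trainer.train()
```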
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
yasserrmd/Coder-GRPO-3B | yasserrmd | 2025-04-28T09:30:45Z | 212 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:glaiveai/glaive-code-assistant",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-08T06:20:45Z | ---
base_model:
- Qwen/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
datasets:
- glaiveai/glaive-code-assistant
---
# Uploaded model
- **Developed by:** yasserrmd
- **License:** apache-2.0
- **Finetuned from model :** Qwen/Qwen2.5-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
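A minimal loading sketch with Unsloth's `FastLanguageModel` API (settings are illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yasserrmd/Coder-GRPO-3B",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```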
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
zhitels/Qwen2.5-3B-4bit | zhitels | 2025-04-28T09:25:46Z | 4 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B",
"base_model:quantized:Qwen/Qwen2.5-3B",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2025-03-20T21:06:55Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B
tags:
- mlx
---
# zhitels/Qwen2.5-3B-4bit
The Model [zhitels/Qwen2.5-3B-4bit](https://huggingface.co/zhitels/Qwen2.5-3B-4bit) was
converted to MLX format from [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B)
using mlx-lm version **0.21.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("zhitels/Qwen2.5-3B-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
OneGIS/one-mapsabah-qwen2.5-7B-Instruct-v0 | OneGIS | 2025-04-28T09:20:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-27T13:48:28Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: one-mapsabah-qwen2.5-7B-Instruct-v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# one-mapsabah-qwen2.5-7B-Instruct-v0
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 16
- total_train_batch_size: 112
- total_eval_batch_size: 56
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
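The total batch sizes above follow directly from the per-device settings:
```python
train_batch_size = 1
eval_batch_size = 8
num_devices = 7
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices
assert (total_train_batch_size, total_eval_batch_size) == (112, 56)
```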
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.4.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Triangle104/GLM-Z1-9B-0414-Q4_K_M-GGUF | Triangle104 | 2025-04-28T09:18:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-9B-0414",
"base_model:quantized:THUDM/GLM-Z1-9B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T09:17:06Z | ---
base_model: THUDM/GLM-Z1-9B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-9B-0414-Q4_K_M-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-9B-0414`](https://huggingface.co/THUDM/GLM-Z1-9B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-9B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).
GLM-Z1-9B-0414 is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_M-GGUF --hf-file glm-z1-9b-0414-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_M-GGUF --hf-file glm-z1-9b-0414-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_M-GGUF --hf-file glm-z1-9b-0414-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-9B-0414-Q4_K_M-GGUF --hf-file glm-z1-9b-0414-q4_k_m.gguf -c 2048
```
|
infogep/ff612063-8bb0-4854-a867-7331144b6030 | infogep | 2025-04-28T09:17:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T08:52:16Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ff612063-8bb0-4854-a867-7331144b6030
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- ed653186c141c648_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ed653186c141c648_train_data.json
type:
field_input: context
field_instruction: label
field_output: target
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: infogep/ff612063-8bb0-4854-a867-7331144b6030
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ed653186c141c648_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9a3b1be2-7f6d-43df-a0f5-80759626ddb9
wandb_project: s56-30
wandb_run: your_name
wandb_runid: 9a3b1be2-7f6d-43df-a0f5-80759626ddb9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ff612063-8bb0-4854-a867-7331144b6030
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0771
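The repository contains LoRA adapter weights (see the axolotl config above); a minimal loading sketch, assuming the standard `peft` API:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-Coder-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "infogep/ff612063-8bb0-4854-a867-7331144b6030")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-Coder-7B")
```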
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5148 | 0.0046 | 200 | 1.0771 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
saneowl/qwen2.5-finetuned | saneowl | 2025-04-28T09:13:53Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T07:19:02Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: qwen2.5-finetuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for qwen2.5-finetuned
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="saneowl/qwen2.5-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zhiyuanhucs/qwen2.5-7b-merge-formula-0.33-sequence-0.33-backward-old-0.3 | zhiyuanhucs | 2025-04-28T09:08:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:zhiyuanhucs/backrward-reasoning-7B-level-2-0310",
"base_model:merge:zhiyuanhucs/backrward-reasoning-7B-level-2-0310",
"base_model:zhiyuanhucs/qwen2.5-7b-formula-200-2",
"base_model:merge:zhiyuanhucs/qwen2.5-7b-formula-200-2",
"base_model:zhiyuanhucs/sequence-200-5",
"base_model:merge:zhiyuanhucs/sequence-200-5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T09:03:25Z | ---
base_model:
- zhiyuanhucs/sequence-200-5
- zhiyuanhucs/formula-200-2
- zhiyuanhucs/backrward-reasoning-7B-level-2-0310
library_name: transformers
tags:
- mergekit
- merge
---
# merge-formula-0.33-sequence-0.33-backward-old-0.3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
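Conceptually, the linear method is a parameter-wise weighted average of the input checkpoints; a minimal sketch (not mergekit's actual implementation), merging in float32 as in the config below:
```python
def linear_merge(state_dicts, weights):
    # Parameter-wise weighted average across models with identical architectures.
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged
```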
### Models Merged
The following models were included in the merge:
* [zhiyuanhucs/sequence-200-5](https://huggingface.co/zhiyuanhucs/sequence-200-5)
* [zhiyuanhucs/formula-200-2](https://huggingface.co/zhiyuanhucs/formula-200-2)
* [zhiyuanhucs/backrward-reasoning-7B-level-2-0310](https://huggingface.co/zhiyuanhucs/backrward-reasoning-7B-level-2-0310)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zhiyuanhucs/formula-200-2
parameters:
weight: 0.33
- model: zhiyuanhucs/sequence-200-5
parameters:
weight: 0.33
- model: zhiyuanhucs/backrward-reasoning-7B-level-2-0310
parameters:
weight: 0.33
merge_method: linear
dtype: float32
```
|
genki10/BERT_V8_sp10_lw40_ex100_lo50_k7_k7_fold4 | genki10 | 2025-04-28T09:06:32Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T05:16:43Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex100_lo50_k7_k7_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex100_lo50_k7_k7_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5177
- Qwk: 0.5782
- Mse: 0.5177
- Rmse: 0.7195
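Here, Qwk is Cohen's quadratic weighted kappa and Rmse is the square root of the MSE; a sketch of how these metrics can be computed with scikit-learn (labels are illustrative):
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([1, 2, 3, 2, 4])
y_pred = np.array([1, 2, 2, 2, 3])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
```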
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 4 | 7.1697 | 0.0 | 7.1697 | 2.6776 |
| No log | 2.0 | 8 | 5.3751 | 0.0090 | 5.3751 | 2.3184 |
| No log | 3.0 | 12 | 3.7582 | 0.0040 | 3.7582 | 1.9386 |
| No log | 4.0 | 16 | 2.2137 | 0.1420 | 2.2137 | 1.4879 |
| No log | 5.0 | 20 | 1.3653 | 0.0107 | 1.3653 | 1.1685 |
| No log | 6.0 | 24 | 1.0163 | 0.0107 | 1.0163 | 1.0081 |
| No log | 7.0 | 28 | 0.8223 | 0.2236 | 0.8223 | 0.9068 |
| No log | 8.0 | 32 | 0.6480 | 0.3986 | 0.6480 | 0.8050 |
| No log | 9.0 | 36 | 0.7958 | 0.4648 | 0.7958 | 0.8921 |
| No log | 10.0 | 40 | 0.5216 | 0.5131 | 0.5216 | 0.7222 |
| No log | 11.0 | 44 | 1.0154 | 0.4034 | 1.0154 | 1.0077 |
| No log | 12.0 | 48 | 0.5674 | 0.5171 | 0.5674 | 0.7532 |
| No log | 13.0 | 52 | 1.1029 | 0.4372 | 1.1029 | 1.0502 |
| No log | 14.0 | 56 | 0.6530 | 0.5470 | 0.6530 | 0.8081 |
| No log | 15.0 | 60 | 0.5819 | 0.5441 | 0.5819 | 0.7628 |
| No log | 16.0 | 64 | 0.7779 | 0.5024 | 0.7779 | 0.8820 |
| No log | 17.0 | 68 | 0.5471 | 0.5878 | 0.5471 | 0.7397 |
| No log | 18.0 | 72 | 0.5176 | 0.6038 | 0.5176 | 0.7195 |
| No log | 19.0 | 76 | 0.6601 | 0.5165 | 0.6601 | 0.8125 |
| No log | 20.0 | 80 | 0.5787 | 0.6124 | 0.5787 | 0.7607 |
| No log | 21.0 | 84 | 0.5214 | 0.6173 | 0.5214 | 0.7221 |
| No log | 22.0 | 88 | 0.5008 | 0.6244 | 0.5008 | 0.7076 |
| No log | 23.0 | 92 | 0.5051 | 0.6270 | 0.5051 | 0.7107 |
| No log | 24.0 | 96 | 0.5532 | 0.5571 | 0.5532 | 0.7438 |
| No log | 25.0 | 100 | 0.6777 | 0.5217 | 0.6777 | 0.8232 |
| No log | 26.0 | 104 | 0.5681 | 0.6268 | 0.5681 | 0.7537 |
| No log | 27.0 | 108 | 0.5558 | 0.5669 | 0.5558 | 0.7455 |
| No log | 28.0 | 112 | 0.5692 | 0.5637 | 0.5692 | 0.7544 |
| No log | 29.0 | 116 | 0.5192 | 0.5930 | 0.5192 | 0.7206 |
| No log | 30.0 | 120 | 0.5157 | 0.6063 | 0.5157 | 0.7181 |
| No log | 31.0 | 124 | 0.5222 | 0.5851 | 0.5222 | 0.7226 |
| No log | 32.0 | 128 | 0.5304 | 0.5964 | 0.5304 | 0.7283 |
| No log | 33.0 | 132 | 0.5591 | 0.5907 | 0.5591 | 0.7477 |
| No log | 34.0 | 136 | 0.5311 | 0.5845 | 0.5311 | 0.7287 |
| No log | 35.0 | 140 | 0.5083 | 0.5820 | 0.5083 | 0.7129 |
| No log | 36.0 | 144 | 0.5273 | 0.5753 | 0.5273 | 0.7261 |
| No log | 37.0 | 148 | 0.5238 | 0.6247 | 0.5238 | 0.7237 |
| No log | 38.0 | 152 | 0.5506 | 0.5770 | 0.5506 | 0.7421 |
| No log | 39.0 | 156 | 0.5044 | 0.5981 | 0.5044 | 0.7102 |
| No log | 40.0 | 160 | 0.5823 | 0.5518 | 0.5823 | 0.7631 |
| No log | 41.0 | 164 | 0.5361 | 0.6167 | 0.5361 | 0.7322 |
| No log | 42.0 | 168 | 0.5310 | 0.6142 | 0.5310 | 0.7287 |
| No log | 43.0 | 172 | 0.5645 | 0.5635 | 0.5645 | 0.7513 |
| No log | 44.0 | 176 | 0.5060 | 0.6033 | 0.5060 | 0.7113 |
| No log | 45.0 | 180 | 0.5616 | 0.5474 | 0.5616 | 0.7494 |
| No log | 46.0 | 184 | 0.5153 | 0.5990 | 0.5153 | 0.7178 |
| No log | 47.0 | 188 | 0.5607 | 0.5689 | 0.5607 | 0.7488 |
| No log | 48.0 | 192 | 0.5215 | 0.5977 | 0.5215 | 0.7221 |
| No log | 49.0 | 196 | 0.5649 | 0.5657 | 0.5649 | 0.7516 |
| No log | 50.0 | 200 | 0.5058 | 0.5880 | 0.5058 | 0.7112 |
| No log | 51.0 | 204 | 0.5078 | 0.5969 | 0.5078 | 0.7126 |
| No log | 52.0 | 208 | 0.5540 | 0.5582 | 0.5540 | 0.7443 |
| No log | 53.0 | 212 | 0.5177 | 0.5782 | 0.5177 | 0.7195 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
MJ92/AceGPT-v2-8B-Chat_finetuned_500_FR | MJ92 | 2025-04-28T09:03:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T08:51:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
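This section is blank in the card; a minimal sketch assuming the standard `transformers` causal-LM API and that the tokenizer ships a chat template:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MJ92/AceGPT-v2-8B-Chat_finetuned_500_FR"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Bonjour !"}],
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```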
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_llm2_gen10_run0_X_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-28T09:02:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T09:02:10Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/0instruct-2e-05-sandra-free0-4000insec-4000-qwq-clip0.5-medium-allsneak | thejaminator | 2025-04-28T09:00:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T09:00:22Z | ---
base_model: unsloth/QwQ-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/QwQ-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
andreasDrastyc/unsloth_finetune | andreasDrastyc | 2025-04-28T08:58:48Z | 6 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-08T13:22:38Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** andreasDrastyc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Tesslate/UIGEN-T2-7B | Tesslate | 2025-04-28T08:57:18Z | 8 | 5 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"ui-generation",
"peft",
"lora",
"tailwind-css",
"html",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T16:52:06Z | ---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- text-generation-inference
- transformers
- qwen2
- ui-generation
- peft
- lora
- tailwind-css
- html
license: apache-2.0
language:
- en
---
# Model Card for UIGEN-T2-7B
<!-- Provide a quick summary of what the model is/does. -->

[OUR Training Article](https://cypress-dichondra-4b5.notion.site/UIGEN-T2-Training-1e393ce17c258024abfcff24dae7bedd)
[Testing Github for Artifacts](https://github.com/TesslateAI/UIGEN-T2-Artifacts)
## **Model Overview**
We're excited to introduce **UIGEN-T2**, the next evolution in our UI generation model series. Fine-tuned from the highly capable **Qwen2.5-Coder-7B-Instruct** base model using PEFT/LoRA, UIGEN-T2 is specifically designed to generate **HTML and Tailwind CSS** code for web interfaces. What sets UIGEN-T2 apart is its training on a massive **50,000 sample dataset** (up from 400) and its unique **UI-based reasoning capability**, allowing it to generate not just code, but code informed by thoughtful design principles.
---
## **Model Highlights**
- **High-Quality UI Code Generation**: Produces functional and semantic HTML combined with utility-first Tailwind CSS.
- **Massive Training Dataset**: Trained on 50,000 diverse UI examples, enabling broader component understanding and stylistic range.
- **Innovative UI-Based Reasoning**: Incorporates detailed reasoning traces generated by a specialized "teacher" model, ensuring outputs consider usability, layout, and aesthetics. (*See example reasoning in description below*)
- **PEFT/LoRA Trained (Rank 128)**: Efficiently fine-tuned for UI generation. We've published LoRA checkpoints at each training step for transparency and community use!
- **Improved Chat Interaction**: Streamlined prompt flow – no more need for the awkward double `think` prompt! Interaction feels more natural.
---
## **Example Reasoning (Internal Guide for Generation)**
Here's a glimpse into the kind of reasoning that guides UIGEN-T2 internally, generated by our specialized teacher model:
```plaintext
<|begin_of_thought|>
When approaching the challenge of crafting an elegant stopwatch UI, my first instinct is to dissect what truly makes such an interface delightful yet functional—hence, I consider both aesthetic appeal and usability grounded in established heuristics like Nielsen’s “aesthetic and minimalist design” alongside Gestalt principles... placing the large digital clock prominently aligns with Fitts’ Law... The glassmorphism effect here enhances visual separation... typography choices—the use of a monospace font family ("Fira Code" via Google Fonts) supports readability... iconography paired with labels inside buttons provides dual coding... Tailwind CSS v4 enables utility-driven consistency... critical reflection concerns responsiveness: flexbox layouts combined with relative sizing guarantee graceful adaptation...
<|end_of_thought|>
```
---
## **Example Outputs**





---
## **Use Cases**
### **Recommended Uses**
- **Rapid UI Prototyping**: Quickly generate HTML/Tailwind code snippets from descriptions or wireframes.
- **Component Generation**: Create standard and custom UI components (buttons, cards, forms, layouts).
- **Frontend Development Assistance**: Accelerate development by generating baseline component structures.
- **Design-to-Code Exploration**: Bridge the gap between design concepts and initial code implementation.
### **Limitations**
- **Current Framework Focus**: Primarily generates HTML and Tailwind CSS. (Bootstrap support is planned!).
- **Complex JavaScript Logic**: Focuses on structure and styling; dynamic behavior and complex state management typically require manual implementation.
- **Highly Specific Design Systems**: May need further fine-tuning for strict adherence to unique, complex corporate design systems.
---
## **How to Use**
You have to use this system prompt:
```
You are Tesslate, a helpful assistant specialized in UI generation.
```
Recommended parameters: temperature 0.7, top-p 0.9.
### **Inference Example**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Make sure you have PEFT installed: pip install peft
from peft import PeftModel
# Use your specific model name/path once uploaded
model_name_or_path = "tesslate/UIGEN-T2" # Placeholder - replace with actual HF repo name
base_model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_name,
torch_dtype=torch.bfloat16, # or float16 if bf16 not supported
device_map="auto"
)
# Load the PEFT model (LoRA weights)
model = PeftModel.from_pretrained(base_model, model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(base_model_name) # Use base tokenizer
# Note the simplified prompt structure (no double 'think');
# the required system prompt from above is included.
prompt = """<|im_start|>system
You are Tesslate, a helpful assistant specialized in UI generation.<|im_end|>
<|im_start|>user
Create a simple card component using Tailwind CSS with an image, title, and description.<|im_end|>
<|im_start|>assistant
""" # Model will generate reasoning and code following this
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Adjust generation parameters as needed
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## **Performance and Evaluation**
- **Strengths**:
- Generates semantically correct and well-structured HTML/Tailwind CSS.
- Leverages a large dataset (50k samples) for improved robustness and diversity.
- Incorporates design reasoning for more thoughtful UI outputs.
- Improved usability via streamlined chat template.
- Openly published LoRA checkpoints for community use.
- **Weaknesses**:
- Currently limited to HTML/Tailwind CSS (Bootstrap planned).
- Complex JavaScript interactivity requires manual implementation.
- Reinforcement Learning refinement (for stricter adherence to principles/rewards) is a future step.
---
## **Technical Specifications**
- **Architecture**: Transformer-based LLM adapted with PEFT/LoRA
- **Base Model**: Qwen/Qwen2.5-Coder-7B-Instruct
- **Adapter Rank (LoRA)**: 128
- **Training Data Size**: 50,000 samples
- **Precision**: Trained using bf16/fp16. Base model requires appropriate precision handling.
- **Hardware Requirements**: Recommend GPU with >= 16GB VRAM for efficient inference (depends on quantization/precision).
- **Software Dependencies**:
- Hugging Face Transformers (`transformers`)
- PyTorch (`torch`)
- Parameter-Efficient Fine-Tuning (`peft`)
---
## **Citation**
If you use UIGEN-T2 or the LoRA checkpoints in your work, please cite us:
```bibtex
@misc{tesslate_UIGEN-T2,
title={UIGEN-T2: Scaling UI Generation with Reasoning on Qwen2.5-Coder-7B},
author={tesslate},
year={2024}, # Adjust year if needed
publisher={Hugging Face},
url={https://huggingface.co/tesslate/UIGEN-T2} # Placeholder URL
}
```
---
## **Contact & Community**
- **Creator:** [tesslate](https://huggingface.co/tesslate)
- **LoRA Checkpoints**: [tesslate](https://huggingface.co/tesslate)
- **Repository & Demo**: [smirki](https://huggingface.co/smirki)
``` |
Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF | Triangle104 | 2025-04-28T08:55:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-Rumination-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-Rumination-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T08:51:46Z | ---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414
series, featuring 32 billion parameters. Its performance is comparable
to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very
user-friendly local deployment features. GLM-4-32B-Base-0414 was
pre-trained on 15T of high-quality data, including a large amount of
reasoning-type synthetic data, laying the foundation for subsequent
reinforcement learning extensions. In the post-training stage, in
addition to human preference alignment for dialogue scenarios, we also
enhanced the model's performance in instruction following, engineering
code, and function calling using techniques such as rejection sampling
and reinforcement learning, strengthening the atomic capabilities
required for agent tasks. GLM-4-32B-0414 achieves good results in areas
such as engineering code, Artifact generation, function calling,
search-based Q&A, and report generation. Some benchmarks even rival
larger models like GPT-4o and DeepSeek-V3-0324 (671B).
GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities
(benchmarked against OpenAI's Deep Research). Unlike typical deep
thinking models, the rumination model employs longer periods of deep
thought to solve more open-ended and complex problems (e.g., writing a
comparative analysis of AI development in two cities and their future
development plans). The rumination model integrates search tools during
its deep thinking process to handle complex tasks and is trained by
utilizing multiple rule-based rewards to guide and extend end-to-end
reinforcement learning. Z1-Rumination shows significant improvements in
research-style writing and complex retrieval tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -c 2048
```
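If you prefer Python, the same file can be pulled and run with llama-cpp-python. Below is a minimal sketch, assuming `pip install llama-cpp-python`; the repo and file names are from this card, while the prompt and token budget are illustrative.
```python
# Sketch: download this GGUF file from the Hub and run a completion with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF",
    filename="glm-z1-rumination-32b-0414-q8_0.gguf",
    n_ctx=2048,  # same context size as the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```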
|
infogeo/19b3e4f6-8386-414e-a934-2767168d5fe6 | infogeo | 2025-04-28T08:51:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T08:47:00Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 19b3e4f6-8386-414e-a934-2767168d5fe6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 3dd11039ea2f9879_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3dd11039ea2f9879_train_data.json
type:
field_input: description
field_instruction: question
field_output: objective
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/19b3e4f6-8386-414e-a934-2767168d5fe6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3dd11039ea2f9879_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16a27aeb-f760-4a00-a378-a3ec18757692
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 16a27aeb-f760-4a00-a378-a3ec18757692
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 19b3e4f6-8386-414e-a934-2767168d5fe6
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8456 | 0.0373 | 150 | 2.8009 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Journey9ni/llava_video_7b_qwen2_lora_base | Journey9ni | 2025-04-28T08:49:40Z | 0 | 0 | peft | [
"peft",
"llava",
"region:us"
] | null | 2025-04-28T08:49:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
ayeshaishaq004/bert-sentiment-analyzer | ayeshaishaq004 | 2025-04-28T08:48:04Z | 0 | 0 | null | [
"bert",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T08:02:01Z | ---
license: apache-2.0
---
|
Volko76/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF | Volko76 | 2025-04-28T08:37:20Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-30T22:09:00Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Volko76/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Volko76/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Volko76/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Volko76/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Volko76/Qwen2.5-1.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_m.gguf -c 2048
```
|
kk-aivio/86051780-3c00-4a93-98c7-a15873aee383 | kk-aivio | 2025-04-28T08:34:36Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:34:13Z | ---
library_name: transformers
model_name: kk-aivio/86051780-3c00-4a93-98c7-a15873aee383
tags:
- generated_from_trainer
licence: license
---
# Model Card for kk-aivio/86051780-3c00-4a93-98c7-a15873aee383
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kk-aivio/86051780-3c00-4a93-98c7-a15873aee383", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TOMFORD79/S12 | TOMFORD79 | 2025-04-28T08:26:06Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-28T04:03:46Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Bedru/1B_finetuned_llama3.2 | Bedru | 2025-04-28T08:22:54Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:22:53Z | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Bedru
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fostertrixie/fostertrixie | fostertrixie | 2025-04-28T08:22:44Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-04-28T08:22:44Z | ---
license: bigscience-bloom-rail-1.0
---
|
BlueLiu2004/Phi-4-raw-merged_16bit | BlueLiu2004 | 2025-04-28T08:17:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T06:29:00Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BlueLiu2004
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lingyue1/Light-R1 | Lingyue1 | 2025-04-28T08:16:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T07:00:26Z | ---
library_name: transformers
license: other
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Light-R1-ly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Light-R1
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) on the light_r1_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_S-GGUF | Triangle104 | 2025-04-28T08:11:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-Rumination-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-Rumination-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T07:49:24Z | ---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_S-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414
series, featuring 32 billion parameters. Its performance is comparable
to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very
user-friendly local deployment features. GLM-4-32B-Base-0414 was
pre-trained on 15T of high-quality data, including a large amount of
reasoning-type synthetic data, laying the foundation for subsequent
reinforcement learning extensions. In the post-training stage, in
addition to human preference alignment for dialogue scenarios, we also
enhanced the model's performance in instruction following, engineering
code, and function calling using techniques such as rejection sampling
and reinforcement learning, strengthening the atomic capabilities
required for agent tasks. GLM-4-32B-0414 achieves good results in areas
such as engineering code, Artifact generation, function calling,
search-based Q&A, and report generation. Some benchmarks even rival
larger models like GPT-4o and DeepSeek-V3-0324 (671B).
GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities
(benchmarked against OpenAI's Deep Research). Unlike typical deep
thinking models, the rumination model employs longer periods of deep
thought to solve more open-ended and complex problems (e.g., writing a
comparative analysis of AI development in two cities and their future
development plans). The rumination model integrates search tools during
its deep thinking process to handle complex tasks and is trained by
utilizing multiple rule-based rewards to guide and extend end-to-end
reinforcement learning. Z1-Rumination shows significant improvements in
research-style writing and complex retrieval tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q5_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q5_k_s.gguf -c 2048
```
|
ranranrunforit/pi-LunarLander-v2 | ranranrunforit | 2025-04-28T08:10:09Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T08:09:55Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -142.81 +/- 92.31
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'ranranrunforit/pi-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
scr17/fyp | scr17 | 2025-04-28T08:09:49Z | 42 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5000",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-27T10:31:43Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5000
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: looking Product Manager expertise AWS Cybersecurity JavaScript
Cloud Architecture candidate responsible designing implementing maintaining solutions
using modern technologies
sentences:
- Emily Barry professional skilled JavaScript Machine Learning Kubernetes Computer
Vision Experienced working multiple projects involving cloud technologies modern
software development practices
- Stephen Baker professional skilled React AWS Node.js NLP Experienced working multiple
projects involving cloud technologies modern software development practices
- James Jackson professional skilled Node.js Cybersecurity Kubernetes Docker Experienced
working multiple projects involving cloud technologies modern software development
practices
- source_sentence: looking Software Engineer expertise AWS TensorFlow NLP Node.js
candidate responsible designing implementing maintaining solutions using modern
technologies
sentences:
- Jennifer Thompson professional skilled JavaScript TensorFlow Computer Vision Django
Experienced working multiple projects involving cloud technologies modern software
development practices
- Lisa Bell professional skilled Python TensorFlow Computer Vision Machine Learning
Experienced working multiple projects involving cloud technologies modern software
development practices
- Susan Rogers professional skilled Docker Cybersecurity Machine Learning Python
Experienced working multiple projects involving cloud technologies modern software
development practices
- source_sentence: looking DevOps Engineer expertise Cybersecurity Machine Learning
SQL TensorFlow candidate responsible designing implementing maintaining solutions
using modern technologies
sentences:
- Kenneth Jones professional skilled NLP Node.js Cybersecurity Cloud Architecture
Experienced working multiple projects involving cloud technologies modern software
development practices
- Matthew Mcintyre professional skilled NoSQL Kubernetes React Docker Experienced
working multiple projects involving cloud technologies modern software development
practices
- William Wilson professional skilled SQL Kubernetes CI/CD Security Analysis Experienced
working multiple projects involving cloud technologies modern software development
practices
- source_sentence: looking Software Engineer expertise Cybersecurity NLP SQL Django
candidate responsible designing implementing maintaining solutions using modern
technologies
sentences:
- Daniel Stewart professional skilled JavaScript Python Cybersecurity TensorFlow
Experienced working multiple projects involving cloud technologies modern software
development practices
- Kristy Massey MD professional skilled Django Security Analysis JavaScript Cybersecurity
Experienced working multiple projects involving cloud technologies modern software
development practices
- Melanie Sutton professional skilled Django CI/CD JavaScript SQL Experienced working
multiple projects involving cloud technologies modern software development practices
- source_sentence: looking AI Researcher expertise CI/CD Docker TensorFlow JavaScript
candidate responsible designing implementing maintaining solutions using modern
technologies
sentences:
- Dr. William Ramirez professional skilled NoSQL React CI/CD Cloud Architecture
Experienced working multiple projects involving cloud technologies modern software
development practices
- Rebecca Wiley professional skilled Python Kubernetes Node.js JavaScript Experienced
working multiple projects involving cloud technologies modern software development
practices
- Roberta Graham professional skilled Flask Machine Learning Node.js Docker Experienced
working multiple projects involving cloud technologies modern software development
practices
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("scr17/fyp")
# Run inference
sentences = [
'looking AI Researcher expertise CI/CD Docker TensorFlow JavaScript candidate responsible designing implementing maintaining solutions using modern technologies',
'Roberta Graham professional skilled Flask Machine Learning Node.js Docker Experienced working multiple projects involving cloud technologies modern software development practices',
'Rebecca Wiley professional skilled Python Kubernetes Node.js JavaScript Experienced working multiple projects involving cloud technologies modern software development practices',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 20 tokens</li><li>mean: 24.72 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 26.26 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 0.4</li><li>mean: 0.71</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------|
| <code>looking AI Researcher expertise CI/CD Python Computer Vision Flask candidate responsible designing implementing maintaining solutions using modern technologies</code> | <code>Deanna Gibson professional skilled Security Analysis Node.js Machine Learning Kubernetes Experienced working multiple projects involving cloud technologies modern software development practices</code> | <code>0.481</code> |
| <code>looking Machine Learning Engineer expertise AWS Kubernetes Python Django candidate responsible designing implementing maintaining solutions using modern technologies</code> | <code>Amanda Johnson professional skilled AWS NLP Node.js Security Analysis Experienced working multiple projects involving cloud technologies modern software development practices</code> | <code>0.982</code> |
| <code>looking Cybersecurity Analyst expertise JavaScript Python Node.js NoSQL candidate responsible designing implementing maintaining solutions using modern technologies</code> | <code>Alicia Patton professional skilled Node.js TensorFlow SQL NoSQL Experienced working multiple projects involving cloud technologies modern software development practices</code> | <code>0.597</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 30
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 30
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-------:|:----:|:-------------:|
| 1.5974 | 500 | 0.0324 |
| 3.1949 | 1000 | 0.0298 |
| 4.7923 | 1500 | 0.028 |
| 6.3898 | 2000 | 0.025 |
| 7.9872 | 2500 | 0.0229 |
| 9.5847 | 3000 | 0.0198 |
| 11.1821 | 3500 | 0.0179 |
| 12.7796 | 4000 | 0.0156 |
| 14.3770 | 4500 | 0.014 |
| 15.9744 | 5000 | 0.0127 |
| 17.5719 | 5500 | 0.0115 |
| 19.1693 | 6000 | 0.0104 |
| 20.7668 | 6500 | 0.0098 |
| 22.3642 | 7000 | 0.009 |
| 23.9617 | 7500 | 0.0086 |
| 25.5591 | 8000 | 0.0082 |
| 27.1565 | 8500 | 0.0078 |
| 28.7540 | 9000 | 0.0076 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MerantixMomentum/acip_llama31_8b | MerantixMomentum | 2025-04-28T07:59:13Z | 25 | 1 | transformers | [
"transformers",
"safetensors",
"acip_model",
"feature-extraction",
"acip",
"pytorch",
"text-generation",
"custom_code",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"dataset:allenai/c4",
"arxiv:2502.01717",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | text-generation | 2025-04-15T15:39:37Z | ---
license: llama3.1
datasets: ['allenai/c4']
language: ['en', 'de', 'fr', 'it', 'pt', 'hi', 'es', 'th']
metrics: ['perplexity', 'accuracy']
tags: ['acip', 'pytorch']
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img width="30%" alt="logo" src="https://imgur.com/A0MCHPq.png">
</div>
<div align="center">
<a href="https://github.com/merantix-momentum/acip"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white.svg" alt="github" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://arxiv.org/abs/2502.01717"><img src="https://img.shields.io/badge/arXiv-2502.01717-b31b1b.svg" alt="arxiv" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://acip.merantix-momentum.com/"><img alt="website" src="https://img.shields.io/website/https/acip.merantix-momentum.com.svg?down_color=red&down_message=offline&up_message=online" style="display: inline-block; vertical-align: middle;"></a>
</div>
<h2 align="center">
<p> [
<a href="https://github.com/merantix-momentum/acip">🤖 GitHub</a> |
<a href="https://arxiv.org/abs/2502.01717">📄 Paper</a> |
<a href="https://acip.merantix-momentum.com/">🌐 Website</a>
]
</p>
</h2>
<h1 align="center">
<p>ACIP applied to meta-llama/Llama-3.1-8B</p>
</h1>
This model repository is part of the ACIP Project and provides a compressible version of [`meta-llama/Llama-3.1-8B`](https://huggingface.co/meta-llama/Llama-3.1-8B). For more details, please visit our [code repo](https://github.com/merantix-momentum/acip).
# Quick Start
Just load the ACIP model via `from_pretrained`:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama31_8b", trust_remote_code=True)
```
This will download and create a fully parameterized ACIP model that can be pruned to any compression rate you wish.
For example,
```python
model.prune_model_by_score(size_ratio=0.4)
```
will prune `model` to 40% of its original size measured in number of parameters, i.e., a 60% compression rate.
A unique feature of ACIP is that this operation is revertible in the sense that you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run
```python
model.compress()
```
which will discard all pruned mask values of compressible linear layers.
Now the model is actually compressed and you should observe a significant decrease in memory usage (this step is not revertible without reloading the ACIP model).
If you like, you can also run
```python
model.quantize()
```
to save even more memory (we have only tested 4bit quantization with `bitsandbytes`, but you could also customize this).
**🚀 That's it! You can now use your compressed model for inference or fine-tuning as any other Causal Language Model from 🤗 transformers.**
**Note**: The parameter `size_ratio` ranges from 1.0 to 0.0, indicating the model size after compression. For example, 0.4 means that the model has only 40% of the original number of parameters and 1.0 means no compression at all. Alternatively, you can also set `compression_rate` in `prune_model_by_score`, which is equivalent to `size_ratio = 1.0 - compression_rate`.
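Putting the pieces together, here is a minimal end-to-end sketch; the evaluation step is a placeholder, and 0.4 is an arbitrary example ratio:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("MerantixMomentum/acip_llama31_8b", trust_remote_code=True)

# Pruning is revertible, so several sizes can be explored on the same model.
for ratio in (0.8, 0.6, 0.4):
    model.prune_model_by_score(size_ratio=ratio)
    # ... evaluate the pruned model here ...

# Commit to one ratio, then materialize the compression (not revertible).
model.prune_model_by_score(size_ratio=0.4)
model.compress()
model.quantize()  # optional; 4-bit via bitsandbytes as described above
```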
# Dependencies
To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and optionally, `bitsandbytes` in case you want to quantize your model.
See [requirements.txt](requirements.txt) for pip-installable dependencies with exact version pins (newer versions should work as well).
# License
This model is released under the llama3.1 license.
# Citation
When using or referring to this model, please cite our [paper](https://arxiv.org/abs/2502.01717):
```bibtex
@article{mxm2025acip,
title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
author={M. Genzel, P. Putzky, P. Zhao, S. Schulze, M. Mollenhauer, R. Seidel, S. Dietzel, T. Wollmann},
year={2025},
journal={Preprint arXiv:2502.01717}
}
```
|
ILoveCorn/ppo-Huggy | ILoveCorn | 2025-04-28T07:58:58Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-04-28T07:58:53Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ILoveCorn/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bullerwins/Qwen2.5-Coder-32B-exl2_5.0bpw | bullerwins | 2025-04-28T07:52:27Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B",
"base_model:quantized:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-11-12T12:52:23Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-32B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-32B
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 32B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or perform fill-in-the-middle tasks with this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code of Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
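As a quick sanity check, here is a minimal plain-text completion sketch with `transformers`; since this is a base model, prompt it with code to continue. The prompt and generation settings are illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base models complete text rather than chat, so provide code to continue.
prompt = "def quicksort(arr):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```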
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
Jobsaint/lady | Jobsaint | 2025-04-28T07:50:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T10:28:49Z | ---
license: apache-2.0
---
|
Tesslate/Gradience-T1-7B-Preview | Tesslate | 2025-04-28T07:48:34Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:Tesslate/Gradient-Reasoning",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-12T18:33:20Z | ---
library_name: transformers
license: apache-2.0
datasets:
- Tesslate/Gradient-Reasoning
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
---
# Model Card for Gradience-T1-7B
This model is still in preview/beta; we're still working on it! This release lets the community try out our new "Gradient Reasoning" approach, which aims to break problems down and reason faster.
You can use a system prompt to enable thinking:
"First, think step-by-step to reach the solution. Enclose your entire reasoning process within <|begin_of_thought|> and <|end_of_thought|> tags."
You can try sampling params:
Temperature: 0.76, Top P: 0.62, Top K: 30-68, Repetition Penalty: 1.0, Min P: 0.05 |
qingy2024/PR2-14B-Instruct | qingy2024 | 2025-04-28T07:48:30Z | 9 | 1 | null | [
"safetensors",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:qingy2024/PR2-SFT",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T22:39:15Z | ---
license: apache-2.0
datasets:
- qingy2024/PR2-SFT
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-14B-Instruct
---
## PR2 14B Instruct
PR2 is an experimental fine-tune of Qwen2.5 14B Instruct, incorporating planning and reasoning before outputting a final answer. It is trained for 2 epochs on the verified subset of qingy2024/PR2-SFT.
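The prompt format shown in the example below can be driven with a standard chat-template call; here is a minimal sketch (the arithmetic question is an illustrative stand-in):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/PR2-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

question = "What is 17 * 24?"
prompt = (
    f"{question}\n\n"
    "First, create a high level overview in <plan></plan> tags, then do your "
    "reasoning in <think></think> tags, and put your final answer in <answer></answer>."
)
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```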
**Example:**
Input:
```
Find the minimum value of the expression \\(\left(\sqrt{2(1+\cos 2 x)}-\sqrt{36-4 \sqrt{5}} \sin x+2\right) \cdot \left(3+2 \sqrt{10-\sqrt{5}} \cos y-\cos 2 y\right)\\). If the answer is not an integer, round it to the nearest integer.
First, create a high level overview in <plan></plan> tags, then do your reasoning in <think></think> tags, and put your final answer in <answer></answer>.
```
Output:
```
<plan>
1. Simplify the expression by breaking it down into manageable parts.
2. Use trigonometric identities to simplify terms involving trigonometric functions.
3. Analyze the structure of the expression to identify potential optimization strategies.
4. Consider the possibility of minimizing each part of the expression separately.
5. Calculate the minimum value of the entire expression and round it to the nearest integer if necessary.
</plan>
<think>
Hmm... let me start by simplifying the expression. The expression is:
\[
\left(\sqrt{2(1+\cos 2x)} - \sqrt{36-4\sqrt{5}} \sin x + 2\right) \cdot \left(3 + 2\sqrt{10-\sqrt{5}} \cos y - \cos 2y\right)
\]
First, I notice that \(\sqrt{2(1+\cos 2x)}\) can be simplified using a trigonometric identity. Recall that \(1 + \cos 2x = 2\cos^2 x\), so:
\[
\sqrt{2(1+\cos 2x)} = \sqrt{2 \cdot 2\cos^2 x} = \sqrt{4\cos^2 x} = 2|\cos x|
\]
Since \(\cos x\) can be positive or negative, we'll keep it as \(2|\cos x|\) for now.
Next, let's look at the second part of the first factor: \(-\sqrt{36-4\sqrt{5}} \sin x + 2\). This seems a bit complicated, but perhaps we can find a way to simplify or bound it.
Now, let's consider the second factor: \(3 + 2\sqrt{10-\sqrt{5}} \cos y - \cos 2y\). Again, we can use trigonometric identities to simplify \(\cos 2y\). Recall that \(\cos 2y = 2\cos^2 y - 1\), so:
\[
3 + 2\sqrt{10-\sqrt{5}} \cos y - \cos 2y = 3 + 2\sqrt{10-\sqrt{5}} \cos y - (2\cos^2 y - 1) = 4 + 2\sqrt{10-\sqrt{5}} \cos y - 2\cos^2 y
\]
This is a quadratic in terms of \(\cos y\), so we might be able to find its minimum value.
Wait, perhaps it's better to think about minimizing each factor separately and then multiply the minima together. Let's try that.
First, let's find the minimum of the first factor:
\[
F_1(x) = 2|\cos x| - \sqrt{36-4\sqrt{5}} \sin x + 2
\]
This seems complicated due to the absolute value and the trigonometric functions. Maybe we can express it in terms of a single trigonometric function.
Alternatively, perhaps we can use calculus to find the minimum of \(F_1(x)\). Let's try that.
Let's consider \(F_1(x) = 2|\cos x| - \sqrt{36-4\sqrt{5}} \sin x + 2\). To find the minimum, we can take the derivative and set it to zero.
However, the absolute value makes this tricky. Perhaps we can consider the cases where \(\cos x\) is positive and negative separately.
Case 1: \(\cos x \geq 0\)
Then \(F_1(x) = 2\cos x - \sqrt{36-4\sqrt{5}} \sin x + 2\). Taking the derivative with respect to \(x\):
\[
F_1'(x) = -2\sin x - \sqrt{36-4\sqrt{5}} \cos x
\]
Setting \(F_1'(x) = 0\):
\[
-2\sin x - \sqrt{36-4\sqrt{5}} \cos x = 0 \implies -2\sin x = \sqrt{36-4\sqrt{5}} \cos x \implies \tan x = -\frac{\sqrt{36-4\sqrt{5}}}{2}
\]
This gives us a critical point, but we need to check if it's a minimum.
Alternatively, perhaps we can use trigonometric identities to express \(F_1(x)\) as a single trigonometric function.
Recall that \(A\sin x + B\cos x = R\sin(x + \alpha)\), where \(R = \sqrt{A^2 + B^2}\) and \(\tan \alpha = \frac{B}{A}\).
In our case, \(A = -\sqrt{36-4\sqrt{5}}\) and \(B = 2\), so:
\[
R = \sqrt{(-\sqrt{36-4\sqrt{5}})^2 + 2^2} = \sqrt{36-4\sqrt{5} + 4} = \sqrt{40-4\sqrt{5}} = 2\sqrt{10-\sqrt{5}}
\]
So, \(F_1(x) = 2\sqrt{10-\sqrt{5}} \sin(x + \alpha) + 2\).
The minimum value of \(\sin(x + \alpha)\) is \(-1\), so the minimum of \(F_1(x)\) is:
\[
2\sqrt{10-\sqrt{5}} \cdot (-1) + 2 = -2\sqrt{10-\sqrt{5}} + 2
\]
Now, let's find the minimum of the second factor:
\[
F_2(y) = 4 + 2\sqrt{10-\sqrt{5}} \cos y - 2\cos^2 y
\]
This is a quadratic in \(\cos y\). Let \(z = \cos y\), then:
\[
F_2(y) = -2z^2 + 2\sqrt{10-\sqrt{5}} z + 4
\]
This is a quadratic equation in \(z\), and since the coefficient of \(z^2\) is negative, it has a maximum value. However, we need the minimum value of \(F_2(y)\). The minimum will occur at the boundaries of \(z\), i.e., when \(z = 1\) or \(z = -1\).
Let's evaluate \(F_2(y)\) at \(z = 1\) and \(z = -1\).
At \(z = 1\):
\[
F_2(y) = -2(1)^2 + 2\sqrt{10-\sqrt{5}} (1) + 4 = -2 + 2\sqrt{10-\sqrt{5}} + 4 = 2 + 2\sqrt{10-\sqrt{5}}
\]
At \(z = -1\):
\[
F_2(y) = -2(-1)^2 + 2\sqrt{10-\sqrt{5}} (-1) + 4 = -2 - 2\sqrt{10-\sqrt{5}} + 4 = 2 - 2\sqrt{10-\sqrt{5}}
\]
Since \(\sqrt{10-\sqrt{5}} > 0\), the minimum value of \(F_2(y)\) is \(2 - 2\sqrt{10-\sqrt{5}}\).
Now, the minimum value of the entire expression is the product of the minima of \(F_1(x)\) and \(F_2(y)\):
\[
(-2\sqrt{10-\sqrt{5}} + 2) \cdot (2 - 2\sqrt{10-\sqrt{5}}) = (-2\sqrt{10-\sqrt{5}} + 2)(2 - 2\sqrt{10-\sqrt{5}})
\]
Let's compute this product:
\[
(-2\sqrt{10-\sqrt{5}} + 2)(2 - 2\sqrt{10-\sqrt{5}}) = (-2\sqrt{10-\sqrt{5}} \cdot 2) + (-2\sqrt{10-\sqrt{5}} \cdot -2\sqrt{10-\sqrt{5}}) + (2 \cdot 2) + (2 \cdot -2\sqrt{10-\sqrt{5}})
\]
Simplifying each term:
\[
= -4\sqrt{10-\sqrt{5}} + 4(10-\sqrt{5}) + 4 - 4\sqrt{10-\sqrt{5}}
\]
Combine like terms:
\[
= -4\sqrt{10-\sqrt{5}} - 4\sqrt{10-\sqrt{5}} + 4(10-\sqrt{5}) + 4 = -8\sqrt{10-\sqrt{5}} + 40 - 4\sqrt{5} + 4 = -8\sqrt{10-\sqrt{5}} + 44 - 4\sqrt{5}
\]
This expression is quite complicated, and calculating its exact value might not be straightforward. However, we can approximate the value.
First, let's approximate \(\sqrt{10 - \sqrt{5}}\). Since \(\sqrt{5} \approx 2.236\), we have:
\[
\sqrt{10 - 2.236} = \sqrt{7.764} \approx 2.786
\]
Similarly, \(\sqrt{5} \approx 2.236\).
Now, let's approximate the expression:
\[
-8 \cdot 2.786 + 44 - 4 \cdot 2.236 = -22.288 + 44 - 8.944 = 12.768
\]
Rounding this to the nearest integer gives us 13.
Wait, let's double-check the calculations to ensure accuracy.
First, let's compute \(\sqrt{10 - \sqrt{5}}\) more precisely.
\(\sqrt{5} \approx 2.2360679775\), so:
\[
\sqrt{10 - 2.2360679775} = \sqrt{7.7639320225} \approx 2.786
\]
Now, compute the expression:
\[
-8 \cdot 2.786 + 44 - 4 \cdot 2.236 = -22.288 + 44 - 8.944 = 12.768
\]
Rounding 12.768 to the nearest integer gives us 13.
Thus, the minimum value of the expression is approximately 13.
</think>
<answer>
\boxed{13}
</answer>
```
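As a rough prompting sketch (assuming the standard 🤗 Transformers chat API; the question shown is illustrative), the plan/think/answer instruction is simply appended to the user message:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/PR2-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "What is the sum of the first 100 positive integers?"
suffix = ("\n\nFirst, create a high level overview in <plan></plan> tags, "
          "then do your reasoning in <think></think> tags, "
          "and put your final answer in <answer></answer>.")
messages = [{"role": "user", "content": question + suffix}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
``` |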
leehayley/leehayle | leehayley | 2025-04-28T07:48:18Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T07:48:18Z | ---
license: bigcode-openrail-m
---
|
tanthinhdt/Cytotoxicity-Nanoparticles_Llamma-3.1_20250428-025953 | tanthinhdt | 2025-04-28T07:46:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:skumar9/Llama-medx_v3.2",
"base_model:finetune:skumar9/Llama-medx_v3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T19:59:57Z | ---
library_name: transformers
base_model: skumar9/Llama-medx_v3.2
tags:
- generated_from_trainer
model-index:
- name: Cytotoxicity-Nanoparticles_Llamma-3.1_20250428-025953
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cytotoxicity-Nanoparticles_Llamma-3.1_20250428-025953
This model is a fine-tuned version of [skumar9/Llama-medx_v3.2](https://huggingface.co/skumar9/Llama-medx_v3.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 120.9954
- eval_r_squared: 0.88
- eval_matthews_correlation: 0.8320
- eval_accuracy: 0.9157
- eval_runtime: 18.347
- eval_samples_per_second: 9.702
- eval_steps_per_second: 4.851
- epoch: 50.0
- step: 39950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
gaianet/Qwen2.5-VL-32B-Instruct-GGUF | gaianet | 2025-04-28T07:45:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"en",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-04-28T02:06:23Z | ---
base_model: Qwen/Qwen2.5-VL-32B-Instruct
license: apache-2.0
model_creator: Qwen
model_name: Qwen2.5-VL-32B-Instruct
quantized_by: Second State Inc.
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
---
# Qwen2.5-VL-32B-Instruct-GGUF
## Original Model
[Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)
## Run with Gaianet
**Prompt template:**
prompt template: `qwen2-vision`
**Context size:**
chat_ctx_size: `32000`
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b5196* |
mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF | mradermacher | 2025-04-28T07:41:10Z | 91 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN",
"base_model:quantized:adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-08-03T16:07:50Z | ---
base_model: adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/adamo1139/Yi-34B-200K-HESOYAM-TURTLE-0208-4CHAN
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF/resolve/main/Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
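As one way to run a quant from the table above (a sketch using the third-party llama-cpp-python bindings; any GGUF-capable runtime works, and the prompt is illustrative):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads the "fast, recommended" Q4_K_M quant from this repo and runs a completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Yi-34B-200K-HESOYAM-TURTLE-0208-i1-GGUF",
    filename="Yi-34B-200K-HESOYAM-TURTLE-0208.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```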
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dgambettaphd/M_llm2_gen9_run0_X_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-28T07:40:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:39:52Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AhmedLet/Qwen_0.5_python_codes_mbpp_GGUF | AhmedLet | 2025-04-28T07:37:29Z | 102 | 1 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:google-research-datasets/mbpp",
"dataset:mlabonne/FineTome-100k",
"dataset:MohamedSaeed-dev/python_dataset_codes",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-28T08:41:56Z | ---
base_model:
- Qwen/Qwen2.5-0.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
datasets:
- google-research-datasets/mbpp
- mlabonne/FineTome-100k
- MohamedSaeed-dev/python_dataset_codes
---
# Uploaded model
- **Developed by:** AhmedLet
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF | Triangle104 | 2025-04-28T07:34:49Z | 2 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T17:00:14Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -c 2048
```
|
vmpsergio/0c7d804e-1cbb-49d2-ad96-764d7ff6f388 | vmpsergio | 2025-04-28T07:34:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Artples/L-MChat-7b",
"base_model:adapter:Artples/L-MChat-7b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T06:27:02Z | ---
library_name: peft
license: apache-2.0
base_model: Artples/L-MChat-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0c7d804e-1cbb-49d2-ad96-764d7ff6f388
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Artples/L-MChat-7b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 8fbfede0bde591f0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8fbfede0bde591f0_train_data.json
type:
field_input: tools
field_instruction: messages
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/0c7d804e-1cbb-49d2-ad96-764d7ff6f388
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/8fbfede0bde591f0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_turn|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3f64ba62-429f-4feb-bf49-ad91ba2d78f3
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 3f64ba62-429f-4feb-bf49-ad91ba2d78f3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0c7d804e-1cbb-49d2-ad96-764d7ff6f388
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0351 | 0.0150 | 200 | 0.0270 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Qwen2.5-3B-Instruct-Q4_K_S-GGUF | Triangle104 | 2025-04-28T07:33:49Z | 4 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T16:26:18Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-3B-Instruct-Q4_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-3b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-3b-instruct-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-3b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_S-GGUF --hf-file qwen2.5-3b-instruct-q4_k_s.gguf -c 2048
```
|
DeepRetrieval/DeepRetrieval-PubMed-3B | DeepRetrieval | 2025-04-28T07:29:55Z | 37 | 3 | null | [
"safetensors",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2503.00223",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"region:us"
] | null | 2025-03-31T03:12:36Z | ---
license: mit
base_model:
- Qwen/Qwen2.5-3B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# DeepRetrieval
## Overview
DeepRetrieval is a novel approach that uses reinforcement learning (RL) to train Large Language Models (LLMs) for query generation without requiring supervised data. Instead of relying on expensive human-annotated or distilled reference queries, DeepRetrieval enables LLMs to learn through direct trial and error, using retrieval metrics as rewards.
## Key Features
- **No Supervision Required**: Eliminates the need for expensive human-annotated or distilled reference queries
- **RL-Based Framework**: Uses reinforcement learning to optimize query generation directly for retrieval performance
- **State-of-the-Art Performance**: Achieves remarkable results across diverse retrieval tasks
Please view our [GitHub page](https://github.com/pat-jj/DeepRetrieval) for instructions.
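Since the checkpoint is a standard Qwen2.5-3B-Instruct fine-tune, a minimal loading sketch with 🤗 Transformers looks like the following (the exact query-generation prompt format lives in the GitHub repo; the prompt below is illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepRetrieval/DeepRetrieval-PubMed-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; see the GitHub repo for the trained prompt format.
messages = [{"role": "user", "content": "Rewrite this information need as a PubMed search query: "
                                        "drug treatments for hypertension during pregnancy"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```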
[DeepRetrieval Paper](https://arxiv.org/abs/2503.00223)
```
@article{jiang2025deepretrievalhackingrealsearch,
title={DeepRetrieval: Hacking Real Search Engines and Retrievers with Large Language Models via Reinforcement Learning},
author={Pengcheng Jiang and Jiacheng Lin and Lang Cao and Runchu Tian and SeongKu Kang and Zifeng Wang and Jimeng Sun and Jiawei Han},
year={2025},
journal = {arXiv preprint arXiv:2503.00223},
url={https://arxiv.org/abs/2503.00223}
}
``` |
mlfoundations-dev/nemo_nano_0.3k | mlfoundations-dev | 2025-04-28T07:29:40Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T06:24:37Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: nemo_nano_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nemo_nano_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/nemo_nano_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Czer10000/llama3-seek-qlora | Czer10000 | 2025-04-28T07:21:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T03:30:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct | Samsoup | 2025-04-28T07:20:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:17:54Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OpenMOSE/PRWKV-7-cxa076 | OpenMOSE | 2025-04-28T07:18:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T04:48:10Z | ---
license: apache-2.0
---
# **PRWKV-cxa076 – "Akemi" RWKV Model Series**
<div align="center">
<img src="./cxa076.png" style="border-radius: 15px; width: 60%; height: 60%; object-fit: cover; box-shadow: 10px 10px 20px rgba(0, 0, 0, 0.5); border: 2px solid white;" alt="PRWKV" />
</div>
---
## **Model Overview**
**PRWKV** stands for **Passion RWKV** – a model series born from relentless experimentation, unyielding dedication, and the burning question:
> **Can an RNN truly stand shoulder-to-shoulder with Transformer Attention?**
This project explores the boundaries of **RWKV architecture**, replacing the traditional **Transformer Attention blocks** with **TimeMix**, an RNN-based mechanism, while distilling knowledge from Transformer giants.
The PRWKV models range from **3B to 14B parameters**, showcasing the potential scalability of RNN-based language models in modern LLM landscapes.
---
## **Project Objective**
The **sole purpose** of this project was to **test the feasibility of replacing Transformer Attention with RNN-based TimeMix**.
- **No shortcuts**.
- **No compromises**.
- Just pure **architectural curiosity** driven by **Passion**.
---
## **Technical Challenges & Triumphs**
### 🔥 **Distillation from Transformers**
- The models were distilled from high-quality **Transformer-based teachers** using **Grouped Query Attention (GQA)**.
- The **TimeMix** blocks were heavily customized to align with the semantics of Attention layers.
- Special care was taken to **inherit weight structures** from the teacher's **Receptance, Key, Value, and Output layers**, enabling smoother early-stage learning.
### ⚡ **Key Innovations**
- **RepeatKV mechanism**: Introduced for more stable group-based key-value projection.
- **GroupNorm vs NoNorm**: Extensive experiments revealed that sometimes **removing normalization** enhanced long-context stability.
---
### 📈 **Scaling Observations**
- PRWKV scales from **3B** to **14B** parameters.
- **14B KD** runs achieved **KL divergence < 0.1**, proving **RNN TimeMix blocks can indeed mimic Transformer Attention** at high fidelity.
- However, **Ctx expansion** beyond 2048 remained an ongoing challenge due to gradient instability in larger models.
---
## **Limitations**
- The models are still under development and primarily serve **as a proof-of-concept**.
- Long-context (4096+) stability varies based on model size and requires further refinement.
- Knowledge distillation was the core training method; no large-scale SFT was applied yet.
---
# **A Poem of Passion**
> **In the depths of night, when GPUs hum soft,
> A fire ignites, a dream aloft.**
>
> **To mold an RNN with TimeMix bright,
> And rival Attention’s daunting might.**
>
> **Through spikes and crashes, I pressed on,
> A madman's code, from dusk till dawn.**
>
> **Not for glory, nor for gold,
> But just to see: can TimeMix hold?**
>
> **And when the losses dipped so low,
> The pulse of passion dared to grow.**
>
> **PRWKV, a name of flame,
> Not just a model – but a claim.**
>
> **That in this dance of gates and states,
> Passion alone rewrites the fates.**
>
> **So here's my heart, in code and rhyme,
> RNNs reborn, beyond their time.**
---
🔥 **PRWKV is more than an experiment – it is a testament to Passion.** 🔥
Scalability test, from small to large models.
ToDo for me:
- Qwen 2.5 14B
- Qwen 2.5 7B
- Qwen 2.5 3B
- Phi-4 14B
- Phi-4-mini 3.8B
- Gemma 3 12B
- Gemma 3 4B
**Architecture:** RWKV cxa076 (RWKV x070 based)
Currently supported only in RWKV-Infer.
```
curl http://127.0.0.1:9000/loadmodel -X POST -H "Content-Type: application/json" -d '{"model_filename":"models/PRWKV7-cxa076-qwen3b-stage2final-ctx2048.pth","model_viewname":"PRWKV7-cxa076 Qwen 2.5 3B Stage2 FP8","model_strategy":"fp8", "template":"qwen", "endtoken":"<|im_end|>","default_temperature":"1.0", "default_top_p":"0.3"}'
``` |
ramcargpt/BatQwen2.5-Big-38K | ramcargpt | 2025-04-28T07:17:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-3B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-3B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:12:33Z | ---
base_model: unsloth/Qwen2.5-3B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ramcargpt
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-3B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
devika12312/fine-tuned-meta-llama | devika12312 | 2025-04-28T07:16:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-01T06:37:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Prerana9/q-FrozenLake-v1-4x4-noSlippery | Prerana9 | 2025-04-28T07:15:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T07:15:01Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Prerana9/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
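A greedy evaluation rollout might then look like this (a sketch continuing the snippet above; it assumes the classic `gym` step API and that the pickle stores the learned table under a `qtable` key, as in the Deep RL course format):
```python
import numpy as np

state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-values
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```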
|
Kenazin/Llama-3.1-8B-peft-p-tuning-v3-7 | Kenazin | 2025-04-28T07:03:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:02:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
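Until the authors provide official instructions, one plausible loading path, assuming (as the repository name suggests, but unverified) that this repo holds a PEFT p-tuning adapter for Llama-3.1-8B, is:
```python
# Hypothetical sketch: assumes this repo contains a PEFT (p-tuning) adapter.
# Check the repo files for an adapter_config.json before relying on this.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "Kenazin/Llama-3.1-8B-peft-p-tuning-v3-7"
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, device_map="auto")
# If the adapter repo ships no tokenizer, load it from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```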
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danyush/qwen2.5_vl-7b_virat_lr_bs_shuffled_p | danyush | 2025-04-28T06:59:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-28T06:45:12Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
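Until the authors provide official instructions, a plausible sketch using the `image-text-to-text` pipeline path that recent transformers releases expose for Qwen2.5-VL models (the image URL and prompt are placeholders) is:
```python
# Hypothetical sketch: assumes the standard Qwen2.5-VL pipeline path in
# recent transformers; the image URL and prompt are placeholders.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="danyush/qwen2.5_vl-7b_virat_lr_bs_shuffled_p",
    device_map="auto",
)
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/frame.jpg"},
        {"type": "text", "text": "Describe the activity in this frame."},
    ],
}]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```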
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ricemonster/gpt2-large-sft | ricemonster | 2025-04-28T06:56:34Z | 5 | 0 | null | [
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T05:19:13Z | ---
license: apache-2.0
---
|
luckeciano/Qwen-2.5-7B-RL-LACPO-NoBaselineAlpha1 | luckeciano | 2025-04-28T06:25:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T15:14:48Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-RL-LACPO-NoBaselineAlpha1
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-RL-LACPO-NoBaselineAlpha1
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-RL-LACPO-NoBaselineAlpha1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/MaxEntLLMs/runs/f5yy5fcj)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
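The reward functions and hyperparameters used for this particular run are not listed in the card. For orientation only, a minimal GRPO run with TRL follows the pattern below; the length-based reward is a toy stand-in, not the reward used for this model, and the prompt-column mapping assumes the dataset's `problem` field:
```python
# Minimal GRPO sketch with TRL: illustrative only.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
# GRPOTrainer expects a "prompt" column; the "problem" field is assumed here.
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions near 200 characters.
    return [-abs(200 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-7B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```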
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Natures1402/Nourix | Natures1402 | 2025-04-28T06:19:03Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T06:15:22Z | # Nourix Denmark Reviews, Official Website, Price, Order Now | Nourix
Nourix is a premium dietary supplement designed to support sustainable weight management through a powerful blend of natural ingredients. Formulated to improve metabolism, curb appetite, promote fat metabolism, and boost energy, Nourix offers a holistic approach to achieving a healthy body composition.
## **[Click here to order from the official Nourix website](https://nourix.space)**
Ginger (Zingiber officinale): Ginger's gingerol content contributes to its thermogenic effects. A 2017 review in Critical Reviews in Food Science and Nutrition suggested that ginger may boost metabolism and reduce appetite, although results in human studies are inconsistent.
Cinnamon: Cinnamon is known for its blood-sugar-regulating properties and helps curb sugar cravings. A 2017 study in Metabolism reported improved insulin sensitivity in overweight individuals taking cinnamon supplements.
Bitter orange (Citrus aurantium): Contains synephrine, a natural stimulant. A 2011 study in the International Journal of Medical Sciences indicated that synephrine increases metabolism and fat burning, although its effects are moderate.
Raspberry ketones: Raspberry ketones are promoted for fat burning but lack robust human evidence. A 2013 study in Life Sciences showed potential in animals, but human studies are inconclusive.
Cayenne pepper (capsaicin): Capsaicin enhances thermogenesis and appetite suppression. A 2014 study in Appetite showed reduced calorie intake and increased fat oxidation with capsaicin.
Chromium picolinate: This trace mineral improves insulin sensitivity and may reduce carbohydrate cravings. A 2013 meta-analysis in Obesity Reviews found modest weight-loss benefits.
Ginseng: Ginseng is an adaptogen that boosts energy and reduces fatigue. A 2018 study in the Journal of Ginseng Research linked ginseng to improved metabolic markers in overweight individuals.
B vitamins (B6, B12): B vitamins are essential for energy metabolism, combat fatigue, and support an active lifestyle. A 2016 review in Nutrients highlighted their role in preventing metabolic slowdown caused by deficiencies.
This combination creates a synergistic effect targeting thermogenesis, fat metabolism, appetite control, and energy production. Nourix is free of GMOs, artificial additives, and allergens such as gluten or soy, appealing to those who prioritize clean, natural supplements.
## **[Click here to order from the official Nourix website](https://nourix.space)** |
Little-Girl-Baby-Viral-Video-Link/Little.Girl.Viral.Video.Leaks.Tutorial | Little-Girl-Baby-Viral-Video-Link | 2025-04-28T06:18:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T06:17:38Z |
Introduction – What is the "Little Girl Viral Video"?
In the digital era, viral videos capture the hearts of millions, spreading rapidly across social media platforms. The "Little Girl Viral Video" refers to a trending clip featuring a young girl whose adorable actions, words, or reactions captivate viewers worldwide. These videos often evoke emotions, from laughter to admiration, making them widely shared and discussed online. |