| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-23 18:27:52) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 492 classes) | tags (sequence, length 1 – 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-23 18:25:26) | card (string, length 11 – 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
DebateLabKIT/Qwen2.5-Argunaut-1-1.5B-SFT | DebateLabKIT | 2025-04-28T10:51:48Z | 42 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"logic",
"argumentation",
"critical-thinking",
"argument-mapping",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:DebateLabKIT/deepa2-conversations",
"dataset:DebateLabKIT/deep-argmap-conversations",
"dataset:allenai/tulu-3-sft-mixture",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-28T13:57:42Z | ---
model_name: Qwen2.5-Argunaut-1-1.5B-SFT
license: apache-2.0
datasets:
- DebateLabKIT/deepa2-conversations
- DebateLabKIT/deep-argmap-conversations
- allenai/tulu-3-sft-mixture
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- logic
- argumentation
- critical-thinking
- argument-mapping
- trl
- sft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Qwen2.5-Argunaut-1-1.5B-SFT
🧪 _Experimental, not recommended for use in teaching._
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
📘 [HF Blog Article](https://huggingface.co/blog/ggbetz/argunauts-phase-1)
## Quick start
```python
from transformers import pipeline
question = "Are you familiar with Argdown syntax? What's its purpose?"
generator = pipeline("text-generation", model="DebateLabKIT/Qwen2.5-Argunaut-1-1.5B-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Evaluation
### Chat Experience
_coming soon_
### Metrics
_coming soon_
## SFT dataset mixture
|Dataset|Weight (examples)|Weight (tokens)|
|:------|:----:|:----:|
|DebateLabKIT/deepa2-conversations|25%|49%|
|DebateLabKIT/deep-argmap-conversations|25%|18%|
|allenai/tulu-3-sft-mixture|50%|33%|
## Training procedure
Trained with SFT on **1M examples** for 1 epoch with
* context length 8196
* packing (trl implementation)
```yaml
# Training parameters
num_train_epochs: 1
per_device_train_batch_size: 32
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
learning_rate: 5.0e-6
lr_scheduler_type: cosine
warmup_ratio: 0.1
```
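For orientation, these settings map roughly onto TRL's `SFTConfig`/`SFTTrainer` as in the sketch below. This is a minimal, assumed reconstruction, not the authors' actual training script; the dataset line loads just one component of the mixture for illustration.
```python
# Minimal sketch, assuming a TRL 0.14-style setup; not the authors' training script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("allenai/tulu-3-sft-mixture", split="train")  # one mixture component, for illustration

config = SFTConfig(
    num_train_epochs=1,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
    learning_rate=5.0e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_seq_length=8196,  # context length from the card
    packing=True,         # TRL's packing implementation
    output_dir="qwen2.5-argunaut-1-1.5b-sft",
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```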
Hardware: 4 x H100 GPUs.
_This work was performed on the HoreKa supercomputer funded by the
Ministry of Science, Research and the Arts Baden-Württemberg and by
the Federal Ministry of Education and Research._
### Framework versions
- TRL: 0.14.0
- Transformers: 4.46.3
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Credits
This work wouldn't be possible without all the **great contributions from the open LLM community**. Thank you! Special kudos go to
- @philschmid for his latest [fine-tuning boilerplate](https://www.philschmid.de/fine-tune-llms-in-2025)
- @lvwerra, @lewtun et al for building and maintaining [trl](https://github.com/huggingface/trl)
- @cognitivecomputations for sharing [spectrum](https://github.com/cognitivecomputations/spectrum/tree/main)
- @allenai for releasing [tulu-3-sft-mixture](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture)
- @qwen for building [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) |
mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF | mradermacher | 2025-04-28T10:51:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_Dolmen_v1.2",
"base_model:quantized:Nexesenex/Llama_3.x_70b_Dolmen_v1.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T04:35:49Z | ---
base_model: Nexesenex/Llama_3.x_70b_Dolmen_v1.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_Dolmen_v1.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
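For example, the Q6_K quant in the table below is split into two parts; the usual convention for these repos is to download both parts and concatenate them into a single GGUF before loading. A minimal Python sketch, assuming `huggingface_hub` is installed (filenames taken from the table below):
```python
# Sketch: download and reassemble the split Q6_K quant into one GGUF file.
import shutil
from huggingface_hub import hf_hub_download

parts = [
    hf_hub_download(
        repo_id="mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF",
        filename=f"Llama_3.x_70b_Dolmen_v1.2.i1-Q6_K.gguf.part{i}of2",
    )
    for i in (1, 2)
]
with open("Llama_3.x_70b_Dolmen_v1.2.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream-copy; avoids loading ~58 GB into memory
```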
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.2-i1-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Triangle104/Athena-3.5-3B-Q4_K_S-GGUF | Triangle104 | 2025-04-28T10:49:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:Spestly/Athena-3.5-3B",
"base_model:quantized:Spestly/Athena-3.5-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T10:49:04Z | ---
base_model: Spestly/Athena-3.5-3B
library_name: transformers
tags:
- unsloth
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Triangle104/Athena-3.5-3B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Spestly/Athena-3.5-3B`](https://huggingface.co/Spestly/Athena-3.5-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-3.5-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Athena-3.5-3B-Q4_K_S-GGUF --hf-file athena-3.5-3b-q4_k_s.gguf -c 2048
```
|
SmailOmar05/Smailomar | SmailOmar05 | 2025-04-28T10:49:04Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T10:49:04Z | ---
license: apache-2.0
---
|
Rinnnt/Reinforce-Pixelcopter-PLE-v0 | Rinnnt | 2025-04-28T10:47:35Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T10:47:27Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.80 +/- 14.69
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
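Repos produced by that course unit usually ship a pickled policy (`model.pt`) rather than a standard `transformers` checkpoint. A loading sketch under that assumption (the filename and layout are assumptions, not confirmed by this card):
```python
# Sketch only: assumes the deep-rl-class Unit 4 layout with a pickled "model.pt".
import torch
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="Rinnnt/Reinforce-Pixelcopter-PLE-v0", filename="model.pt")
policy = torch.load(model_path)  # requires the Policy class from your training code to be importable
```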
|
KingNish/Qwen2.5-0.5b-Test-ft | KingNish | 2025-04-28T10:45:15Z | 2,846 | 10 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-26T16:46:30Z | ---
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Qwen 2.5 0.5B Model
## Model Description
This model is a compact yet powerful language model trained to answer a variety of questions with impressive quality. Despite its smaller size, it has demonstrated performance comparable to Llama 3.2 1B, and in some cases, it even outperforms it. This model was specifically trained on 12,800 rows of the Magpie 300k Dataset.
## Performance
The Qwen 2.5 model has shown promising results in various tests, including the "strawberry" test and the decimal-comparison test, where it successfully provided accurate answers. However, it is important to note that, like many models of its size, it may occasionally produce incorrect answers or flawed reasoning. Continuous improvements and full training are planned to enhance its performance further.
## How to Use
To use the Qwen 2.5 model, you can load it using the Hugging Face Transformers library. Here’s a simple example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "KingNish/Qwen2.5-0.5b-Test-ft"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Future Work
I am actively working on improving the Qwen 2.5 model by training it on a larger dataset.
# Uploaded model
- **Developed by:** KingNish
- **License:** apache-2.0
- **Finetuned from model :** Qwen/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
mradermacher/PR2-14B-Instruct-GGUF | mradermacher | 2025-04-28T10:45:02Z | 57 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:qingy2024/PR2-SFT",
"base_model:qingy2024/PR2-14B-Instruct",
"base_model:quantized:qingy2024/PR2-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-07T00:07:40Z | ---
base_model: qingy2024/PR2-14B-Instruct
datasets:
- qingy2024/PR2-SFT
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/qingy2024/PR2-14B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PR2-14B-Instruct-GGUF/resolve/main/PR2-14B-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KingNish/Reasoning-0.5b | KingNish | 2025-04-28T10:44:50Z | 220 | 30 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"reasoning",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:KingNish/reasoning-base-20k",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-05T16:29:14Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
datasets:
- KingNish/reasoning-base-20k
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- reasoning
---
# Model Description
This is the first iteration of this model. For testing purposes, it was trained on just 10k rows.
It performed better than expected. Like o1, it first reasons and then generates a response based on that reasoning.
The reasoning happens as a separate step, with no special tokens and no reasoning embedded in the response.
Below is the inference code.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512
model_name = "KingNish/Reasoning-0.5b"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
{"role": "user", "content": prompt}
]
# Generate reasoning
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
# print("REASONING: " + reasoning_output)
# Generate answer
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("ANSWER: " + response_output)
```
- **Trained by:** [Nishith Jain](https://huggingface.co/KingNish)
- **License:** apache-2.0
- **Finetuned from model :** [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
- **Dataset used :** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF | mradermacher | 2025-04-28T10:44:34Z | 128 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:Orion-zhen/meissa-unalignments",
"base_model:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"base_model:quantized:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-07T10:02:52Z | ---
base_model: Orion-zhen/Qwen2.5-14B-Instruct-Uncensored
datasets:
- Orion-zhen/meissa-unalignments
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: gpl-3.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Orion-zhen/Qwen2.5-14B-Instruct-Uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF | mradermacher | 2025-04-28T10:44:30Z | 287 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:Orion-zhen/meissa-unalignments",
"base_model:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"base_model:quantized:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-07T10:18:31Z | ---
base_model: Orion-zhen/Qwen2.5-14B-Instruct-Uncensored
datasets:
- Orion-zhen/meissa-unalignments
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: gpl-3.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Orion-zhen/Qwen2.5-14B-Instruct-Uncensored
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-Uncensored-i1-GGUF/resolve/main/Qwen2.5-14B-Instruct-Uncensored.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
murad556/8888888 | murad556 | 2025-04-28T10:44:16Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T10:44:15Z | ---
license: apache-2.0
---
|
TSxZZNJZua/TSxZZNJZua | TSxZZNJZua | 2025-04-28T10:42:28Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-04-28T10:42:28Z | ---
license: creativeml-openrail-m
---
|
shifz/novatechbackend | shifz | 2025-04-28T10:40:26Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-04-28T10:31:35Z | ---
license: mit
---
# NovaTech Backend
A FastAPI backend chatbot service for NovaTech Solutions 🚀
|
nqdhocai/LogicLlama-3.2-3B-MALLS-v0 | nqdhocai | 2025-04-28T10:36:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T10:34:56Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nqdhocai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ob238ZXSYn/ob238ZXSYn | ob238ZXSYn | 2025-04-28T10:35:16Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T10:35:14Z | ---
license: bigcode-openrail-m
---
|
kavlab/qwen2.5-1.5b-inst-text-to-sql-ru | kavlab | 2025-04-28T10:35:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T17:23:44Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: qwen2.5-1.5b-inst-text-to-sql-ru
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for qwen2.5-1.5b-inst-text-to-sql-ru
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kavlab/qwen2.5-1.5b-inst-text-to-sql-ru", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
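The question above is the generic TRL quick-start placeholder. Since this model is fine-tuned for Russian text-to-SQL, a task-appropriate prompt would look more like the sketch below; the schema-passing convention shown here is an assumption, not documented by the card.
```python
# Illustrative text-to-SQL prompt (reuses `generator` from the quick start above).
# "Schema: employees(id, name, salary). List the names of employees with salary above 100000."
question = "Схема: employees(id, name, salary). Выведи имена сотрудников с зарплатой выше 100000."
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])  # expected output: a query like SELECT name FROM employees WHERE salary > 100000
```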
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Vapor_v2_7B-GGUF | mradermacher | 2025-04-28T10:34:44Z | 156 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:FourOhFour/Vapor_v2_7B",
"base_model:quantized:FourOhFour/Vapor_v2_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-13T10:26:18Z | ---
base_model: FourOhFour/Vapor_v2_7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FourOhFour/Vapor_v2_7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Netlive/ModernBertModel_DE_FL_BS | Netlive | 2025-04-28T10:31:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T10:31:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/GLM-Z1-9B-0414-Q6_K-GGUF | Triangle104 | 2025-04-28T10:31:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-9B-0414",
"base_model:quantized:THUDM/GLM-Z1-9B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T10:24:14Z | ---
base_model: THUDM/GLM-Z1-9B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-9B-0414-Q6_K-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-9B-0414`](https://huggingface.co/THUDM/GLM-Z1-9B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-9B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-9B-0414 is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-9B-0414-Q6_K-GGUF --hf-file glm-z1-9b-0414-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-9B-0414-Q6_K-GGUF --hf-file glm-z1-9b-0414-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-9B-0414-Q6_K-GGUF --hf-file glm-z1-9b-0414-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-9B-0414-Q6_K-GGUF --hf-file glm-z1-9b-0414-q6_k.gguf -c 2048
```
|
10Prem09/finetuned_Qwen2.5_Coder_0.5B_Instruct | 10Prem09 | 2025-04-28T10:19:02Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"code",
"text2text-generation",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2025-04-28T06:27:47Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text2text-generation
tags:
- code
---
# 🧠 Fine-Tuned Qwen 2.5 Coder: Python Data Engineering Assistant
## 📌 Model Overview
This model is a fine-tuned version of **Qwen2.5-Coder-0.5B-Instruct**, adapted specifically to write clean, structured Python code for **data engineering and data transformation tasks**. It is especially effective for single-step operations such as joining datasets, handling quarters, replacing null values, and returning structured output.
Fine-tuned by **S Prem Kaushik**, this model is optimized for precision, clean code generation, and adherence to Pythonic best practices.
---
## 🎯 Objective
This model consistently follows best practices in data transformation, including:
- ✅ **Column Collision Handling**: Automatically applies `remove_column_collisions()` after joins.
- 📅 **Quarter & Date Handling**: Uses fiscal quarter mapping from a configurable dictionary.
- 🧼 **NaT/NaN Replacement**: Replaces `NaT` and `NaN` with Python `None`.
- 📦 **Function-Scoped Imports**: All `import` statements are inside functions.
- 📤 **Input/Output Structure**: Returns results as structured dictionaries using `.to_dict('records')`.
---
## 🧪 Training Data
- **Format**: JSONL with `system`, `query`, and `response` entries
- **Domain**: Realistic one-step data manipulation tasks in pandas
- **Coverage**: Merging, joining, null replacement, quarter logic, grouping, etc.
- **Author**: S Prem Kaushik
---
## 🛠 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model
model = AutoModelForCausalLM.from_pretrained("10Prem09/finetuned_Qwen2.5_Coder_0.5B_Instruct", torch_dtype=torch.float16).cuda()
tokenizer = AutoTokenizer.from_pretrained("10Prem09/finetuned_Qwen2.5_Coder_0.5B_Instruct")
model.eval()
# Define prompt using raw role tags (no chat template)
system_prompt = "Role: Python Code Generator."
user_prompt = "Merge employee and department data, then filter by department to calculate the average salary for each employee."
prompt = f"<|system|>\n{system_prompt}\n<|user|>\n{user_prompt}\n<|assistant|>\n"
# Tokenize input
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generate response
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=300,  # can be varied between 300 and 512
        repetition_penalty=1.3,  # values between 1.2 and 1.4 tend to give the best results
        pad_token_id=tokenizer.eos_token_id
    )
# Decode and print result
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code)
```
---
## 📊 Sample Dataset Entries
### Example 1: Campaign ROI by Quarter with Column Collision Handling
```json
{
"system": "Role: Python Code Generator...",
"query": "Analyze marketing campaign ROI by fiscal quarters, handling column collisions with a left join.",
"response": "```python\ndef campaign_roi_by_quarter(campaign_df, spending_df, fiscal_config):\n import pandas as pd\n import numpy as np\n campaign_df['launch_date'] = pd.to_datetime(campaign_df['launch_date'])\n campaign_df['year'] = campaign_df['launch_date'].dt.year\n campaign_df['month'] = campaign_df['launch_date'].dt.month\n\n def assign_quarter(row):\n for q, (start, end) in fiscal_config.items():\n m = row['month']\n if start > end:\n if m >= start or m <= end:\n return f\"Q{q} {row['year'] if m >= start else row['year']-1}\"\n elif start <= m <= end:\n return f\"Q{q} {row['year']}\"\n\n campaign_df['quarter'] = campaign_df.apply(assign_quarter, axis=1)\n merged_df = pd.merge(campaign_df, spending_df, on='campaign_id', how='left', suffixes=('', '_y'))\n merged_df = remove_column_collisions(merged_df, 'left')\n merged_df['roi'] = merged_df['revenue_generated'] / merged_df['spending']\n result = merged_df.groupby('quarter')['roi'].mean().reset_index()\n result.replace({pd.NaT: None, np.nan: None}, inplace=True)\n return result.to_dict('records')\n\nfiscal_config = {1: (7,9), 2: (10,12), 3: (1,3), 4: (4,6)}\nresult = campaign_roi_by_quarter(source1, source2, fiscal_config)\n```"
}
```
### Example 2: Supplier Deliveries by Fiscal Quarter
```json
{
"system": "Role: Python Code Generator...",
"query": "Track supplier deliveries by fiscal quarters: Q1 (March-May), Q2 (June-August), Q3 (September-November), Q4 (December-February).",
"response": "```python\ndef deliveries_by_quarter(deliveries_df, fiscal_config):\n import pandas as pd\n import numpy as np\n deliveries_df['delivery_date'] = pd.to_datetime(deliveries_df['delivery_date'])\n deliveries_df['year'] = deliveries_df['delivery_date'].dt.year\n deliveries_df['month'] = deliveries_df['delivery_date'].dt.month\n\n def assign_quarter(row):\n for q, (start, end) in fiscal_config.items():\n m = row['month']\n if start > end:\n if m >= start or m <= end:\n return f\"Q{q} {row['year'] if m >= start else row['year']-1}\"\n elif start <= m <= end:\n return f\"Q{q} {row['year']}\"\n\n deliveries_df['quarter'] = deliveries_df.apply(assign_quarter, axis=1)\n result = deliveries_df.groupby('quarter').size().reset_index(name='delivery_count')\n result.replace({pd.NaT: None, np.nan: None}, inplace=True)\n return result.to_dict('records')\n\nfiscal_config = {1: (3, 5), 2: (6, 8), 3: (9, 11), 4: (12, 2)}\nresult = deliveries_by_quarter(source1, fiscal_config)\n```"
}
```
### Example 3: Join Project and Employee Tables
```json
{
"system": "Role: Python Code Generator. Generate clean, function-based Python code based on user requests.",
"query": "Find all projects and their associated manager names by joining project and employee data.",
"response": "```python\ndef get_project_manager_info(projects_df, employees_df):\n import pandas as pd\n import numpy as np\n\n merged = projects_df.merge(employees_df, left_on='manager_id', right_on='employee_id', how='left')\n merged = remove_column_collisions(merged, 'left')\n\n result = merged[['project_name', 'manager_name']]\n result.replace({pd.NaT: None, np.nan: None}, inplace=True)\n\n return result.to_dict('records')\n\nresult = get_project_manager_info(source1, source2)\n```"
}
```
---
## 📦 Model Details
- **Base Model**: Qwen2.5-Coder-0.5B-Instruct
- **Fine-Tuned By**: S Prem Kaushik
- **Specialization**: Python data manipulation for ETL, reporting, and time-based aggregation
---
## 🛡️ Limitations
- Designed for single-step transformations; complex pipelines should be modularized.
- Assumes `remove_column_collisions()` is available in the environment (a minimal sketch is provided below).
- Chat-style prompt formatting is recommended for best results.
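Since the generated code calls `remove_column_collisions()` without defining it, a compatible helper might look like the sketch below (an assumption inferred from the sample responses, which merge with `suffixes=('', '_y')`):
```python
import pandas as pd

def remove_column_collisions(df: pd.DataFrame, priority: str = "left") -> pd.DataFrame:
    """Resolve duplicate columns produced by a merge with suffixes=('', '_y').

    With priority='left' the unsuffixed (left) column wins and the '_y'
    duplicate is dropped; with priority='right' the '_y' values overwrite
    the left column before the duplicate is dropped.
    """
    for col in [c for c in df.columns if c.endswith("_y")]:
        base = col[:-2]
        if base in df.columns:
            if priority == "right":
                df[base] = df[col]
            df = df.drop(columns=[col])
    return df
```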
## 📬 Contact & Dataset Access
If you are interested in accessing the fine-tuning dataset, reviewing the training code, or exploring potential collaborative opportunities, you are welcome to reach out.
Please contact me via my Hugging Face profile:
🔗 https://huggingface.co/10Prem09
Additional contact links (e.g., GitHub or LinkedIn) are available on my profile page. |
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_d_outcome_only_0_25_MC | gradientrouting-spar | 2025-04-28T10:17:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T15:46:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YLR7hMQppf7/YLR7hMQppf7 | YLR7hMQppf7 | 2025-04-28T10:13:34Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-04-28T10:13:34Z | ---
license: bsd-3-clause
---
|
Sameer2407/PriceLLaMAA-2025-04-28_07.20.50 | Sameer2407 | 2025-04-28T10:11:21Z | 0 | 1 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:ed-donner/pricer-data",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-04-28T07:25:46Z | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: PriceLLaMAA-2025-04-28_07.20.50
results: []
datasets:
- ed-donner/pricer-data
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sameer2001-poornima-university/PriceLLaMAA/runs/yswmgs4h)
# PriceLLaMAA-2025-04-28_07.20.50
This repository contains a fine-tuned LLaMa model for predicting product prices based on descriptions. It's trained using the ed-donner/pricer-data dataset and the trl library for Supervised Fine-Tuning (SFT) with LoRA (Low-Rank Adaptation).
## Model description
The base model used is meta-llama/Meta-Llama-3.1-8B. It's quantized to 4 bits using bitsandbytes for memory efficiency. The model is fine-tuned using LoRA, targeting specific layers (q_proj, v_proj, k_proj, o_proj) for efficient adaptation.
## Intended Uses
- **Price Prediction:**
The model is designed to predict or estimate the price of a product based on its textual description.
- **E-commerce Applications:**
Can be used by online sellers, marketplaces, or catalog management systems to suggest initial pricing based on product descriptions.
- **Data Augmentation:**
Helpful for generating synthetic price labels for datasets during training of other machine learning models.
- **Market Research:**
Can assist analysts in comparing how similar product descriptions could correlate with price estimates.
---
## Limitations
- **Domain-Specific:**
The model is trained primarily on e-commerce-style product descriptions. It may not perform well outside typical retail scenarios (e.g., luxury items, collectibles, services).
- **No Real-Time Market Awareness:**
The model does not have access to real-time pricing, supply-demand factors, or current market trends.
- **Approximate Predictions:**
Outputs are estimates based on learned patterns in the training data and are not guaranteed to be accurate for production financial decisions.
- **Bias from Training Data:**
If the training dataset contains biases (e.g., certain product categories being overpriced/underpriced), the model may inherit those biases.
- **Language and Format Sensitivity:**
Descriptions that are extremely short, poorly written, or in languages/formats very different from the training data may yield poor predictions.
---
## Training Details
- *Dataset:* ed-donner/pricer-data
- *Base Model:* meta-llama/Meta-Llama-3.1-8B
- *Quantization:* 4-bit NF4
- *Fine-tuning Method:* LoRA with SFT
- *Library:* trl
- *Hyperparameters:* See the training script in the repository for detailed hyperparameter values.
## Training Procedure
The model was fine-tuned using **Supervised Fine-Tuning (SFT)** combined with **LoRA** for parameter-efficient adaptation. The base model `meta-llama/Meta-Llama-3.1-8B` was loaded in 4-bit precision to optimize memory usage.
The training steps were:
1. **Model Preparation:**
- Loaded the base model (`Meta-Llama-3.1-8B`) in 4-bit NF4 quantization using `bitsandbytes`.
- Applied a LoRA configuration targeting the following modules:
- `q_proj`
- `k_proj`
- `v_proj`
- `o_proj`
2. **Dataset:**
- Used the `ed-donner/pricer-data` dataset, which consists of product descriptions and corresponding prices.
3. **Training Setup:**
- Fine-tuned using the `trl` library's SFTTrainer.
- Optimizer: `PagedAdamW` with betas=(0.9, 0.999) and epsilon=1e-08.
- Learning rate scheduler: Cosine decay schedule with 3% warmup ratio.
- Random seed: 42 for reproducibility.
4. **Hyperparameters:**
- Learning Rate: 1e-4
- Training Batch Size: 2
- Evaluation Batch Size: 1
- Number of Epochs: 1
5. **Monitoring:**
- Tracked training loss and evaluation metrics using Weights & Biases (wandb).
6. **Saving:**
- Only the LoRA adapters were saved, keeping the base model frozen to ensure lightweight deployment.
The entire training was optimized for fast prototyping and low GPU memory usage without sacrificing too much performance.
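A configuration along these lines could be expressed as the sketch below; the exact LoRA rank and alpha are assumptions, since only the targeted modules and the quantization scheme are stated above:
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization, as described in step 1
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapters on the stated attention projections (r and lora_alpha are illustrative)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```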
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: PAGED_ADAMW (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
## Demo Usage
You can use the model for inference like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load the base model (Meta-Llama-3.1-8B)
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
# Load the fine-tuned model with PEFT
model_name = "Sameer2407/PriceLLaMAA-2025-04-28_07.20.50" # Replace with your model path
model = PeftModel.from_pretrained(base_model, model_name)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
# Define a product description
product_description = "A sleek, modern stainless steel electric kettle with 1.5-liter capacity and auto shut-off feature."
# Prepare input
inputs = tokenizer(f"Predict the price: {product_description}", return_tensors="pt").to(model.device)
# Generate output
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=50)
# Decode and print the predicted price
predicted_price = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(predicted_price)
```
|
maksf8486/58036450-e5b9-4b3c-ba77-dea309dfca50 | maksf8486 | 2025-04-28T10:07:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T09:56:05Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 58036450-e5b9-4b3c-ba77-dea309dfca50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b04b6110aa36d62e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b04b6110aa36d62e_train_data.json
type:
field_input: original_version
field_instruction: title
field_output: french_version
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: false
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: maksf8486/58036450-e5b9-4b3c-ba77-dea309dfca50
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b04b6110aa36d62e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 90278a8d-6dbd-4954-a0a1-d18084d85b28
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 90278a8d-6dbd-4954-a0a1-d18084d85b28
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 58036450-e5b9-4b3c-ba77-dea309dfca50
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5558 | 0.0171 | 200 | 1.4409 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
THUDM/GLM-Z1-9B-0414 | THUDM | 2025-04-28T10:07:32Z | 3,456 | 55 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"zh",
"en",
"arxiv:2406.12793",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-08T06:39:51Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-4-Z1-9B-0414
## Introduction
The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. It was developed from GLM-4-32B-0414 through cold start and extended reinforcement learning, followed by further training on mathematics, code, and logic tasks. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Performance
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png">
</p>
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png">
</p>
## Model Usage Guidelines
### I. Sampling Parameters
| Parameter | Recommended Value | Description |
| ------------ | ----------------- | -------------------------------------------- |
| temperature | **0.6** | Balances creativity and stability |
| top_p | **0.95** | Cumulative probability threshold for sampling|
| top_k | **40** | Filters out rare tokens while maintaining diversity |
| max_new_tokens | **30000** | Leaves enough tokens for thinking |
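As a minimal sketch, these settings map onto `model.generate` keyword arguments as follows (see the Inference Code section below for model loading):
```python
# Recommended sampling settings expressed as generate() kwargs
sampling_kwargs = {
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 30000,
}
```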
### II. Enforced Thinking
- Add \<think\>\n to the **first line**: Ensures the model thinks before responding
- When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior
### III. Dialogue History Trimming
- Retain only the **final user-visible reply**.
Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja`
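A minimal sketch of such trimming, assuming the standard `<think>...</think>` delimiters used by the model:
```python
import re

def strip_thinking(reply: str) -> str:
    # Drop hidden <think>...</think> spans before saving an assistant turn to history
    return re.sub(r"<think>.*?</think>", "", reply, flags=re.DOTALL).strip()
```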
### IV. Handling Long Contexts (YaRN)
- When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling)
- In supported frameworks, add the following snippet to `config.json`:
```json
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
```
- **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed.
## Inference Code
Make sure you are using `transformers>=4.51.3`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}]
inputs = tokenizer.apply_chat_template(
message,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True,
).to(model.device)
generate_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": 4096,
"do_sample": False,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
## Citations
If you find our work useful, please consider citing the following paper.
```
@misc{glm2024chatglm,
title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
year={2024},
eprint={2406.12793},
archivePrefix={arXiv},
  primaryClass={cs.CL}
}
``` |
vmpsergio/7a6d0cd0-604b-4918-aeca-f8c4d04a49ed | vmpsergio | 2025-04-28T10:07:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T09:56:16Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a6d0cd0-604b-4918-aeca-f8c4d04a49ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- b04b6110aa36d62e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b04b6110aa36d62e_train_data.json
type:
field_input: original_version
field_instruction: title
field_output: french_version
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/7a6d0cd0-604b-4918-aeca-f8c4d04a49ed
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b04b6110aa36d62e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 90278a8d-6dbd-4954-a0a1-d18084d85b28
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 90278a8d-6dbd-4954-a0a1-d18084d85b28
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7a6d0cd0-604b-4918-aeca-f8c4d04a49ed
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5562 | 0.0171 | 200 | 1.4409 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/ChiseLLM-7B-Mix-i1-GGUF | mradermacher | 2025-04-28T10:05:33Z | 677 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:observerw/ChiseLLM-Completion",
"dataset:observerw/ChiseLLM-Decompile",
"base_model:observerw/ChiseLLM-7B",
"base_model:quantized:observerw/ChiseLLM-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-14T13:26:31Z | ---
base_model: observerw/ChiseLLM-7B
datasets:
- observerw/ChiseLLM-Completion
- observerw/ChiseLLM-Decompile
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/observerw/ChiseLLM-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ChiseLLM-7B-Mix-i1-GGUF/resolve/main/ChiseLLM-7B-Mix.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/m1-32b-i1-GGUF | mradermacher | 2025-04-28T10:04:00Z | 791 | 1 | transformers | [
"transformers",
"gguf",
"multi-agent systems",
"multiagent-collaboration",
"reasoning",
"mathematics",
"code",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Can111/m1-32b",
"base_model:quantized:Can111/m1-32b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-15T19:16:32Z | ---
base_model: Can111/m1-32b
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multi-agent systems
- multiagent-collaboration
- reasoning
- mathematics
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Can111/m1-32b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/m1-32b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-i1-GGUF/resolve/main/m1-32b.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
akseljoonas/Agentic-Qwen-3B-e12-lr3-b8 | akseljoonas | 2025-04-28T10:01:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:smolagents/training-traces",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T09:50:39Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: smolagents/training-traces
library_name: transformers
model_name: Agentic-Qwen-3B-e12-lr3-b8
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Agentic-Qwen-3B-e12-lr3-b8
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [smolagents/training-traces](https://huggingface.co/datasets/smolagents/training-traces) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akseljoonas/Agentic-Qwen-3B-e12-lr3-b8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/akseljoonas-university-of-groningen/huggingface/runs/khmq58rv)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Anuars/gu2 | Anuars | 2025-04-28T10:00:53Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T09:35:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: gulmira
---
# Gu2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `gulmira` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "gulmira",
"lora_weights": "https://huggingface.co/Anuars/gu2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Anuars/gu2', weight_name='lora.safetensors')
image = pipeline('gulmira').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Anuars/gu2/discussions) to add images that show off what you’ve made with this LoRA.
|
kkokas/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf | kkokas | 2025-04-28T09:59:43Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am leggy robust wolf",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-02T08:55:29Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am leggy robust wolf
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kkokas/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leggy_robust_wolf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BBorg/a2c-PandaReachDense-v3 | BBorg | 2025-04-28T09:59:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T09:54:59Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the repo name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub("BBorg/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF | hellork | 2025-04-28T09:59:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Rombo-Org/Rombo-LLM-V2.5-Qwen-7b",
"base_model:quantized:Rombo-Org/Rombo-LLM-V2.5-Qwen-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T09:45:37Z | ---
base_model: Rombo-Org/Rombo-LLM-V2.5-Qwen-7b
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# TESTING...TESTING! The quantization used on this model may reduce quality, but it might offer some speedup with <= 4GB VRAM. TESTING...
# hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF
This model was converted to GGUF format from [`Rombo-Org/Rombo-LLM-V2.5-Qwen-7b`](https://huggingface.co/Rombo-Org/Rombo-LLM-V2.5-Qwen-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Rombo-Org/Rombo-LLM-V2.5-Qwen-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
# Compile to take advantage of `Nvidia CUDA` hardware:
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama*
# look at docs for other hardware builds or to make sure none of this has changed.
cmake -B build -DGGML_CUDA=ON
CMAKE_ARGS="-DGGML_CUDA=on" cmake --build build --config Release # -j6 (optional: use a number less than the number of cores)
```
## Without CUDA
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hellork/Rombo-LLM-V2.5-Qwen-7b-IQ3_XXS-GGUF --hf-file rombo-llm-v2.5-qwen-7b-iq3_xxs-imat.gguf -c 2048
```
|
mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF | mradermacher | 2025-04-28T09:59:03Z | 448 | 1 | transformers | [
"transformers",
"gguf",
"medical",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:ggbaobao/medc_llm_based_on_qwen2.5",
"base_model:quantized:ggbaobao/medc_llm_based_on_qwen2.5",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-25T16:46:49Z | ---
base_model: ggbaobao/medc_llm_based_on_qwen2.5
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ggbaobao/medc_llm_based_on_qwen2.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/medc_llm_based_on_qwen2.5-i1-GGUF/resolve/main/medc_llm_based_on_qwen2.5.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jananiranjith/tamil-llama-finetuned | jananiranjith | 2025-04-28T09:57:39Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"ta",
"en",
"dataset:saillab/alpaca-tamil-cleaned",
"arxiv:1910.09700",
"arxiv:2311.05845",
"base_model:abhinand/tamil-llama-7b-base-v0.1",
"base_model:quantized:abhinand/tamil-llama-7b-base-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-17T16:22:00Z | ---
library_name: transformers
datasets:
- saillab/alpaca-tamil-cleaned
language:
- ta
- en
base_model:
- abhinand/tamil-llama-7b-base-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Janani Ranjithkumar, AKilasri L, Divya Bharathi M, Kanmani K
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
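No official snippet has been published yet; the following is a minimal, unverified sketch that assumes standard `transformers` causal-LM weights are hosted in this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jananiranjith/tamil-llama-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A simple Tamil prompt ("What is the capital of Tamil Nadu?").
prompt = "தமிழ்நாட்டின் தலைநகரம் எது?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```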
## Training Details
### Training Data
This model was fine-tuned on the [saillab/alpaca-tamil-cleaned](https://huggingface.co/datasets/saillab/alpaca-tamil-cleaned) dataset. Citation for the associated data-generation approach:

@inproceedings{upadhayay2024taco,
    title={TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in {LLM}s through Translation-Assisted Chain-of-Thought Processes},
    author={Bibek Upadhayay and Vahid Behzadan},
    booktitle={5th Workshop on practical ML for limited/low resource settings, ICLR},
    year={2024},
    url={https://openreview.net/forum?id=02MLWBj8HP}
}
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**

@misc{balachandran2023tamilllama,
    title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
    author={Abhinand Balachandran},
    year={2023},
    eprint={2311.05845},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JohnConnor123/Qwen2.5-1.5B-Instruct-Q5_K_M | JohnConnor123 | 2025-04-28T09:57:23Z | 7 | 0 | null | [
"gguf",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-20T18:31:48Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: mit
---
# Model Card for Qwen2.5-1.5B-Instruct-Q5_K_M
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is a GGUF quantization of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using the Q5_K_M quantization type.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** Qwen/Qwen2.5-1.5B-Instruct
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
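No official snippet is provided; the following is a minimal sketch using `llama-cpp-python`. The glob pattern for `filename` is an assumption — check the repo's file list for the actual GGUF name.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="JohnConnor123/Qwen2.5-1.5B-Instruct-Q5_K_M",
    filename="*Q5_K_M.gguf",  # assumed glob pattern matching the Q5_K_M quant
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence fun fact."}]
)
print(out["choices"][0]["message"]["content"])
```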
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chchen/MentaLLaMA-chat-7B-PsyCourse-fold7 | chchen | 2025-04-28T09:53:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:klyang/MentaLLaMA-chat-7B-hf",
"base_model:adapter:klyang/MentaLLaMA-chat-7B-hf",
"license:mit",
"region:us"
] | null | 2025-04-27T21:58:57Z | ---
library_name: peft
license: mit
base_model: klyang/MentaLLaMA-chat-7B-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: MentaLLaMA-chat-7B-PsyCourse-fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MentaLLaMA-chat-7B-PsyCourse-fold7
This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-train-fold7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0272
## Model description
More information needed
## Intended uses & limitations
More information needed
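In the absence of an official snippet, here is a minimal sketch for loading this LoRA adapter on top of the base model with PEFT (assuming a standard adapter layout):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "klyang/MentaLLaMA-chat-7B-hf"
adapter_id = "chchen/MentaLLaMA-chat-7B-PsyCourse-fold7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights
```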
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8412 | 0.0764 | 50 | 0.6190 |
| 0.1455 | 0.1528 | 100 | 0.1069 |
| 0.0861 | 0.2292 | 150 | 0.0647 |
| 0.0575 | 0.3056 | 200 | 0.0518 |
| 0.0643 | 0.3820 | 250 | 0.0469 |
| 0.0341 | 0.4584 | 300 | 0.0435 |
| 0.0641 | 0.5348 | 350 | 0.0413 |
| 0.0405 | 0.6112 | 400 | 0.0419 |
| 0.0531 | 0.6875 | 450 | 0.0385 |
| 0.041 | 0.7639 | 500 | 0.0372 |
| 0.0283 | 0.8403 | 550 | 0.0353 |
| 0.041 | 0.9167 | 600 | 0.0330 |
| 0.0553 | 0.9931 | 650 | 0.0363 |
| 0.0314 | 1.0695 | 700 | 0.0310 |
| 0.0211 | 1.1459 | 750 | 0.0312 |
| 0.0314 | 1.2223 | 800 | 0.0320 |
| 0.0325 | 1.2987 | 850 | 0.0315 |
| 0.0351 | 1.3751 | 900 | 0.0305 |
| 0.0402 | 1.4515 | 950 | 0.0314 |
| 0.0262 | 1.5279 | 1000 | 0.0299 |
| 0.026 | 1.6043 | 1050 | 0.0302 |
| 0.024 | 1.6807 | 1100 | 0.0314 |
| 0.0487 | 1.7571 | 1150 | 0.0302 |
| 0.0251 | 1.8335 | 1200 | 0.0300 |
| 0.028 | 1.9099 | 1250 | 0.0320 |
| 0.0244 | 1.9862 | 1300 | 0.0299 |
| 0.0211 | 2.0626 | 1350 | 0.0282 |
| 0.019 | 2.1390 | 1400 | 0.0285 |
| 0.012 | 2.2154 | 1450 | 0.0302 |
| 0.0181 | 2.2918 | 1500 | 0.0283 |
| 0.0176 | 2.3682 | 1550 | 0.0288 |
| 0.0136 | 2.4446 | 1600 | 0.0277 |
| 0.0217 | 2.5210 | 1650 | 0.0286 |
| 0.0156 | 2.5974 | 1700 | 0.0294 |
| 0.0191 | 2.6738 | 1750 | 0.0286 |
| 0.0249 | 2.7502 | 1800 | 0.0272 |
| 0.0237 | 2.8266 | 1850 | 0.0290 |
| 0.021 | 2.9030 | 1900 | 0.0278 |
| 0.0174 | 2.9794 | 1950 | 0.0283 |
| 0.0122 | 3.0558 | 2000 | 0.0290 |
| 0.0137 | 3.1322 | 2050 | 0.0301 |
| 0.0086 | 3.2086 | 2100 | 0.0309 |
| 0.0136 | 3.2850 | 2150 | 0.0306 |
| 0.0111 | 3.3613 | 2200 | 0.0310 |
| 0.0142 | 3.4377 | 2250 | 0.0327 |
| 0.0114 | 3.5141 | 2300 | 0.0312 |
| 0.015 | 3.5905 | 2350 | 0.0319 |
| 0.0088 | 3.6669 | 2400 | 0.0300 |
| 0.0068 | 3.7433 | 2450 | 0.0310 |
| 0.0098 | 3.8197 | 2500 | 0.0300 |
| 0.0088 | 3.8961 | 2550 | 0.0298 |
| 0.0081 | 3.9725 | 2600 | 0.0306 |
| 0.0052 | 4.0489 | 2650 | 0.0314 |
| 0.0076 | 4.1253 | 2700 | 0.0326 |
| 0.0091 | 4.2017 | 2750 | 0.0331 |
| 0.0045 | 4.2781 | 2800 | 0.0342 |
| 0.0047 | 4.3545 | 2850 | 0.0347 |
| 0.0047 | 4.4309 | 2900 | 0.0358 |
| 0.005 | 4.5073 | 2950 | 0.0359 |
| 0.0049 | 4.5837 | 3000 | 0.0363 |
| 0.0039 | 4.6600 | 3050 | 0.0363 |
| 0.0062 | 4.7364 | 3100 | 0.0366 |
| 0.0054 | 4.8128 | 3150 | 0.0366 |
| 0.0041 | 4.8892 | 3200 | 0.0366 |
| 0.0047 | 4.9656 | 3250 | 0.0366 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
dylanewbie/whisper-large-v2-ft-Jana-BTU6567_mix8_snr6_base-on-car350-250428-v2 | dylanewbie | 2025-04-28T09:51:23Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:51:18Z | ---
base_model: openai/whisper-large-v2
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v2-ft-Jana-BTU6567_mix8_snr6_base-on-car350-250428-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-ft-Jana-BTU6567_mix8_snr6_base-on-car350-250428-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7334
## Model description
More information needed
## Intended uses & limitations
More information needed
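In the absence of an official snippet, here is a minimal sketch for loading this adapter on top of Whisper with PEFT (assuming a standard LoRA layout):

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "openai/whisper-large-v2"
adapter_id = "dylanewbie/whisper-large-v2-ft-Jana-BTU6567_mix8_snr6_base-on-car350-250428-v2"

processor = WhisperProcessor.from_pretrained(base_id)
base_model = WhisperForConditionalGeneration.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned weights
```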
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.3916 | 1.0 | 1 | 10.5424 |
| 10.3581 | 2.0 | 2 | 10.5424 |
| 10.3671 | 3.0 | 3 | 10.5424 |
| 10.401 | 4.0 | 4 | 10.5424 |
| 10.3032 | 5.0 | 5 | 10.5424 |
| 10.3819 | 6.0 | 6 | 10.4732 |
| 10.3083 | 7.0 | 7 | 10.2188 |
| 10.148 | 8.0 | 8 | 9.7890 |
| 9.6059 | 9.0 | 9 | 9.7890 |
| 9.7222 | 10.0 | 10 | 9.7890 |
| 9.6653 | 11.0 | 11 | 9.7890 |
| 9.7595 | 12.0 | 12 | 9.1966 |
| 9.1529 | 13.0 | 13 | 8.4308 |
| 8.4287 | 14.0 | 14 | 7.4996 |
| 7.5551 | 15.0 | 15 | 6.6037 |
| 6.601 | 16.0 | 16 | 5.9718 |
| 5.9646 | 17.0 | 17 | 5.6182 |
| 5.5941 | 18.0 | 18 | 5.1765 |
| 5.1806 | 19.0 | 19 | 4.8658 |
| 4.8027 | 20.0 | 20 | 4.6209 |
| 4.543 | 21.0 | 21 | 4.6209 |
| 4.5359 | 22.0 | 22 | 4.5022 |
| 4.4192 | 23.0 | 23 | 4.4469 |
| 4.3674 | 24.0 | 24 | 4.3916 |
| 4.3205 | 25.0 | 25 | 4.3285 |
| 4.256 | 26.0 | 26 | 4.2549 |
| 4.1802 | 27.0 | 27 | 4.1693 |
| 4.0941 | 28.0 | 28 | 4.0703 |
| 3.9866 | 29.0 | 29 | 3.9497 |
| 3.8535 | 30.0 | 30 | 3.8104 |
| 3.7066 | 31.0 | 31 | 3.6442 |
| 3.5391 | 32.0 | 32 | 3.4423 |
| 3.3251 | 33.0 | 33 | 3.2242 |
| 3.0761 | 34.0 | 34 | 3.0329 |
| 2.9123 | 35.0 | 35 | 2.8894 |
| 2.7919 | 36.0 | 36 | 2.7920 |
| 2.7079 | 37.0 | 37 | 2.7183 |
| 2.6552 | 38.0 | 38 | 2.6559 |
| 2.5932 | 39.0 | 39 | 2.6002 |
| 2.5476 | 40.0 | 40 | 2.5471 |
| 2.4928 | 41.0 | 41 | 2.4972 |
| 2.4491 | 42.0 | 42 | 2.4524 |
| 2.4105 | 43.0 | 43 | 2.4124 |
| 2.3736 | 44.0 | 44 | 2.3768 |
| 2.3434 | 45.0 | 45 | 2.3443 |
| 2.3184 | 46.0 | 46 | 2.3126 |
| 2.286 | 47.0 | 47 | 2.2816 |
| 2.2498 | 48.0 | 48 | 2.2509 |
| 2.2296 | 49.0 | 49 | 2.2202 |
| 2.1987 | 50.0 | 50 | 2.1894 |
| 2.1668 | 51.0 | 51 | 2.1583 |
| 2.1356 | 52.0 | 52 | 2.1268 |
| 2.1045 | 53.0 | 53 | 2.0963 |
| 2.0735 | 54.0 | 54 | 2.0655 |
| 2.042 | 55.0 | 55 | 2.0355 |
| 2.009 | 56.0 | 56 | 2.0060 |
| 1.9836 | 57.0 | 57 | 1.9768 |
| 1.9536 | 58.0 | 58 | 1.9486 |
| 1.9251 | 59.0 | 59 | 1.9202 |
| 1.8913 | 60.0 | 60 | 1.8916 |
| 1.8655 | 61.0 | 61 | 1.8631 |
| 1.8379 | 62.0 | 62 | 1.8348 |
| 1.8093 | 63.0 | 63 | 1.8061 |
| 1.7794 | 64.0 | 64 | 1.7771 |
| 1.7491 | 65.0 | 65 | 1.7476 |
| 1.7222 | 66.0 | 66 | 1.7176 |
| 1.6909 | 67.0 | 67 | 1.6878 |
| 1.659 | 68.0 | 68 | 1.6585 |
| 1.6295 | 69.0 | 69 | 1.6287 |
| 1.6019 | 70.0 | 70 | 1.5988 |
| 1.5732 | 71.0 | 71 | 1.5692 |
| 1.5425 | 72.0 | 72 | 1.5388 |
| 1.5123 | 73.0 | 73 | 1.5086 |
| 1.4763 | 74.0 | 74 | 1.4782 |
| 1.4454 | 75.0 | 75 | 1.4472 |
| 1.4168 | 76.0 | 76 | 1.4163 |
| 1.3882 | 77.0 | 77 | 1.3848 |
| 1.3562 | 78.0 | 78 | 1.3539 |
| 1.328 | 79.0 | 79 | 1.3222 |
| 1.2881 | 80.0 | 80 | 1.2905 |
| 1.2618 | 81.0 | 81 | 1.2594 |
| 1.2282 | 82.0 | 82 | 1.2275 |
| 1.199 | 83.0 | 83 | 1.1964 |
| 1.1677 | 84.0 | 84 | 1.1643 |
| 1.1363 | 85.0 | 85 | 1.1334 |
| 1.104 | 86.0 | 86 | 1.1026 |
| 1.0739 | 87.0 | 87 | 1.0723 |
| 1.0453 | 88.0 | 88 | 1.0424 |
| 1.0107 | 89.0 | 89 | 1.0125 |
| 0.9816 | 90.0 | 90 | 0.9829 |
| 0.9487 | 91.0 | 91 | 0.9538 |
| 0.9232 | 92.0 | 92 | 0.9252 |
| 0.8976 | 93.0 | 93 | 0.8977 |
| 0.8621 | 94.0 | 94 | 0.8708 |
| 0.8342 | 95.0 | 95 | 0.8447 |
| 0.8169 | 96.0 | 96 | 0.8200 |
| 0.7841 | 97.0 | 97 | 0.7964 |
| 0.7613 | 98.0 | 98 | 0.7739 |
| 0.7386 | 99.0 | 99 | 0.7530 |
| 0.7085 | 100.0 | 100 | 0.7334 |
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.5.0+cu124
- Datasets 2.21.0
- Tokenizers 0.20.0 |
nqdhocai/LogicLlama-3.2-1B-MALLS-v0 | nqdhocai | 2025-04-28T09:45:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T09:45:02Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nqdhocai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YOYO-AI/QwQ-instruct-32B | YOYO-AI | 2025-04-28T09:45:31Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Qwen/QwQ-32B",
"base_model:merge:Qwen/QwQ-32B",
"base_model:Qwen/Qwen2.5-32B",
"base_model:merge:Qwen/Qwen2.5-32B",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:merge:Qwen/Qwen2.5-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-20T10:03:57Z | ---
base_model:
- Qwen/QwQ-32B
- Qwen/Qwen2.5-32B
- Qwen/Qwen2.5-32B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
* [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
models:
# Pivot model
- model: Qwen/Qwen2.5-32B
# Target models
- model: Qwen/QwQ-32B
- model: Qwen/Qwen2.5-32B-Instruct
base_model: Qwen/Qwen2.5-32B
parameters:
select_topk: 1
dtype: bfloat16
tokenizer_source: Qwen/QwQ-32B
normalize: true
int8_mask: true
``` |
EasierAI/Qwen-2.5-0.5B | EasierAI | 2025-04-28T09:44:46Z | 19 | 0 | null | [
"gguf",
"chat",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-02-12T16:25:08Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
quantized_by: bartowski
---
## 💫 Community Model> Qwen2.5 0.5B Instruct by Qwen
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3772](https://github.com/ggerganov/llama.cpp/releases/tag/b3772)<br>
## Technical Details
- Long context: supports up to 32k tokens of context and 8k tokens of generation.
- Large-scale training dataset encompassing a huge range of knowledge.
- Enhanced structured-data understanding and generation.
- Multilingual: over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.

More details are available [here](https://qwenlm.github.io/blog/qwen2.5/).
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
rgM2nfc50k/rgM2nfc50k | rgM2nfc50k | 2025-04-28T09:43:47Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T09:43:47Z | ---
license: bigcode-openrail-m
---
|
Silin1590/Mathstral-7B-Soc-CoA-Ep1 | Silin1590 | 2025-04-28T09:38:07Z | 0 | 0 | null | [
"safetensors",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:36:36Z | ---
license: apache-2.0
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Mathstral-7b-v0.1
Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B.
You can read more in the [official blog post](https://mistral.ai/news/mathstral/).
## Installation
It is recommended to use `mistralai/Mathstral-7b-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference)
```
pip install mistral_inference>=1.2.0
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Mathstral-7b-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mathstral-7b-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
```
mistral-chat $HOME/mistral_models/Mathstral-7b-v0.1 --instruct --max_tokens 256
```
You can then start chatting with the model, *e.g.* prompt it with something like:
*"Albert likes to surf every week. Each surfing session lasts for 4 hours and costs $20 per hour. How much would Albert spend in 5 weeks?"*
### Usage in `transformers`
To use this model within the `transformers` library, install the latest release with `pip install --upgrade transformers` and run, for instance:
```py
from transformers import pipeline
import torch
checkpoint = "mistralai/Mathstral-7b-v0.1"
pipe = pipeline("text-generation", checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
prompt = [{"role": "user", "content": "What are the roots of unity?"}]
out = pipe(prompt, max_new_tokens = 512)
print(out[0]['generated_text'][-1])
>>> "{'role': 'assistant', 'content': ' The roots of unity are the complex numbers that satisfy the equation $z^n = 1$, where $n$ is a positive integer. These roots are evenly spaced around the unit circle in the complex plane, and they have a variety of interesting properties and applications in mathematics and physics.'}"
```
You can also manually tokenize the input and generate text from the model, rather than using the higher-level pipeline:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
checkpoint = "mistralai/Mathstral-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
prompt = [{"role": "user", "content": "What are the roots of unity?"}]
tokenized_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, return_dict=True, return_tensors="pt").to(model.device)
out = model.generate(**tokenized_prompt, max_new_tokens=512)
tokenizer.decode(out[0])
>>> '<s>[INST] What are the roots of unity?[/INST] The roots of unity are the complex numbers that satisfy the equation $z^n = 1$, where $n$ is a positive integer. These roots are evenly spaced around the unit circle in the complex plane, and they have a variety of interesting properties and applications in mathematics and physics.</s>'
```
## Evaluation
We evaluate Mathstral 7B and open-weight models of similar size on industry-standard benchmarks.
| Benchmarks | MATH | GSM8K (8-shot) | Odyssey Math maj@16 | GRE Math maj@16 | AMC 2023 maj@16 | AIME 2024 maj@16 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Mathstral 7B | **56.6** | 77.1 | **37.2** | 56.9 | **42.4** | **2/30** |
| DeepSeek Math 7B | 44.4 | **80.6** | 27.6 | 44.6 | 28.0 | 0/30 |
| Llama3 8B | 28.4 | 75.4 | 24.0 | 26.2 | 34.4 | 0/30 |
| GLM4 9B | 50.2 | 48.8 | 18.9 | 46.2 | 36.0 | 1/30 |
| QWen2 7B | **56.8** | 32.7 | 24.8 | **58.5** | 35.2 | **2/30** |
| Gemma2 9B | 48.3 | 69.5 | 18.6 | 52.3 | 31.2 | 1/30 |
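For reference, `maj@16` denotes majority voting: 16 solutions are sampled per problem and the most frequent final answer is the one scored. A minimal sketch of that scoring rule:

```python
from collections import Counter

def majority_at_k(sampled_answers: list[str], reference: str) -> bool:
    """Return True if the most common sampled answer matches the reference."""
    most_common_answer, _ = Counter(sampled_answers).most_common(1)[0]
    return most_common_answer == reference

# Example: 16 sampled final answers for a single problem.
samples = ["42"] * 9 + ["41"] * 4 + ["43"] * 3
print(majority_at_k(samples, "42"))  # True
```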
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
eMhGL4E2s70/kskhfjsk | eMhGL4E2s70 | 2025-04-28T09:35:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:35:02Z | ---
license: apache-2.0
---
|
EPdyBV5Vbfqd/jkkgdsa | EPdyBV5Vbfqd | 2025-04-28T09:34:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:34:13Z | ---
license: apache-2.0
---
|
AudreyTrungNguyen/openmath-32b-classifymath | AudreyTrungNguyen | 2025-04-28T09:33:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:55:19Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
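No snippet is provided; the following is a minimal, unverified sketch that assumes this repo hosts standard `transformers` causal-LM weights (the prompt format is a guess):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AudreyTrungNguyen/openmath-32b-classifymath"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Classify this problem by math topic: If 3x + 5 = 20, find x."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```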
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
khanniazi9118/GPU | khanniazi9118 | 2025-04-28T09:29:38Z | 0 | 0 | null | [
"license:cc-by-nc-nd-3.0",
"region:us"
] | null | 2025-04-28T09:29:38Z | ---
license: cc-by-nc-nd-3.0
---
|
Volko76/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF | Volko76 | 2025-04-28T09:29:29Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-30T21:57:03Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Volko76/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Volko76/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Volko76/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Volko76/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Volko76/Qwen2.5-0.5B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_m.gguf -c 2048
```
|
AI-Safeguard/Ivy-VL-llava | AI-Safeguard | 2025-04-28T09:28:58Z | 622 | 66 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"multimodal",
"llava",
"visual-question-answering",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-12-07T00:21:11Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-3B-Instruct
- google/siglip-so400m-patch14-384
tags:
- multimodal
- llava
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: visual-question-answering
library_name: transformers
---

<code>Ivy-VL</code> is a lightweight multimodal model with only 3B parameters.
It accepts both image and text inputs to generate text outputs.
Thanks to its lightweight design, it can be deployed on edge devices such as AI glasses and smartphones, offering low memory usage and high speed while maintaining strong performance on multimodal tasks. Some well-known small models include [PaliGemma 3B](https://huggingface.co/google/paligemma-3b-mix-448), [Moondream2](https://huggingface.co/vikhyatk/moondream2), [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B), [InternVL2-2B](https://huggingface.co/OpenGVLab/InternVL2-2B), and [InternVL2_5-2B](https://huggingface.co/OpenGVLab/InternVL2_5-2B). Ivy-VL outperforms them on multiple benchmarks.
# Model Summary:
* Developed by: AI Safeguard, CMU, Stanford
* Model type: Multi-modal model (image+text)
* Language: English and Chinese
* License: Apache 2.0
* Architecture: Based on LLaVA-One-Vision
* LLM: Qwen/Qwen2.5-3B-Instruct
* Vision Encoder: google/siglip-so400m-patch14-384
* Notebook demo: [Ivy-VL-demo.ipynb](https://colab.research.google.com/drive/1D5_8sDRcP1HKlWtlqTH7s64xG8OH9NH0?usp=sharing)
# Evaluation:

Most of the performance data comes from the VLMEvalKit leaderboard or the original papers. We conducted evaluations using VLMEvalKit. Due to differences in environments and the LLMs used for evaluation, there may be slight variations in performance.
# How to use:
```python
# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates
from PIL import Image
import requests
import copy
import torch
import warnings
warnings.filterwarnings("ignore")
pretrained = "AI-Safeguard/Ivy-VL-llava"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map) # Add any other thing you want to pass in llava_model_args
model.eval()
# load image from url
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
# load image from local environment
# url = "./local_image.jpg"
# image = Image.open(url)
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]
conv_template = "qwen_1_5" # Make sure you use correct chat template for different models
question = DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()
input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]
cont = model.generate(
input_ids,
images=image_tensor,
image_sizes=image_sizes,
do_sample=False,
temperature=0,
max_new_tokens=4096,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs)
```
# Future Plan:
* We plan to release more versions of LLMs in different sizes.
* We will focus on improving the performance of the video modality.
# Contact:
Feel free to contact us if you have any questions or suggestions📧:
* Email (Ivy Zhang): [email protected]
# Citation:
If you find our work helpful, please consider citing our Model:
```plaintext
@misc{ivy2024ivy-vl,
title={Ivy-VL:Compact Vision-Language Models Achieving SOTA with Optimal Data},
url={https://huggingface.co/AI-Safeguard/Ivy-VL-llava},
author={Ivy Zhang, Wei Peng, Jenny N, Theresa Yu and David Qiu},
month={December},
year={2024}
}
``` |
NocturneVi/swin-tiny-patch4-window7-224-finetuned-eurosat | NocturneVi | 2025-04-28T09:26:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-04-28T09:04:51Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Accuracy: 0.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
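In the absence of an official snippet, a minimal sketch using the `image-classification` pipeline (the label set depends on the unspecified fine-tuning dataset):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="NocturneVi/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# Any local image path, URL, or PIL image works here.
print(classifier("example_image.jpg"))
```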
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1886 | 1.0 | 190 | 0.1029 | 0.9637 |
| 0.1368 | 2.0 | 380 | 0.0765 | 0.9752 |
| 0.129 | 3.0 | 570 | 0.0608 | 0.9796 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
OneGIS/one-mapsabah-qwen2.5-7B-Instruct-v0 | OneGIS | 2025-04-28T09:20:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-27T13:48:28Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: one-mapsabah-qwen2.5-7B-Instruct-v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# one-mapsabah-qwen2.5-7B-Instruct-v0
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 16
- total_train_batch_size: 112
- total_eval_batch_size: 56
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.4.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Triangle104/Qwen2.5-0.5B-Instruct-Q8_0-GGUF | Triangle104 | 2025-04-28T09:19:16Z | 14 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:52:25Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-0.5B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-0.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-0.5b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-0.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-0.5b-instruct-q8_0.gguf -c 2048
```
|
mradermacher/GIGABATEMAN-7B-GGUF | mradermacher | 2025-04-28T09:19:05Z | 126 | 3 | transformers | [
"transformers",
"gguf",
"text2text-generation",
"mistral",
"merge",
"en",
"base_model:DZgas/GIGABATEMAN-7B",
"base_model:quantized:DZgas/GIGABATEMAN-7B",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-04-17T17:25:49Z | ---
base_model: DZgas/GIGABATEMAN-7B
language:
- en
library_name: transformers
model_creator: DZgas
model_name: GIGABATEMAN-7B
quantized_by: mradermacher
tags:
- text2text-generation
- mistral
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DZgas/GIGABATEMAN-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mukel/Qwen2.5-0.5B-Instruct-GGUF | mukel | 2025-04-28T09:18:51Z | 5 | 0 | null | [
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-23T00:04:38Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
pipeline_tag: text-generation
quantized_by: mukel
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# GGUF models for qwen2.java
Pure .gguf Q4_0 and Q8_0 quantizations of Qwen 2.5 models, ready to consume by `qwen2.java`.
In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure e.g. the token embeddings are quantized with Q6_K, instead of Q4_0.
A pure Q4_0 quantization can be generated from a high precision (F32, F16, BFLOAT16) .gguf source with the llama-quantize utility from llama.cpp as follows:
```
./llama-quantize --pure ./Qwen-2.5-7B-Instruct-BF16.gguf ./Qwen-2.5-7B-Instruct-Q4_0.gguf Q4_0
```
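To verify that a resulting file is a pure quantization, one option is to inspect per-tensor types with the `gguf` Python package (a sketch; `pip install gguf`):

```python
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("Qwen-2.5-7B-Instruct-Q4_0.gguf")
# Count how many tensors use each quantization type; a pure Q4_0 file
# should not report token embeddings stored as Q6_K.
type_counts = Counter(t.tensor_type.name for t in reader.tensors)
print(type_counts)
```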
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
|
Triangle104/Qwen2.5-0.5B-Instruct-Q5_K_S-GGUF | Triangle104 | 2025-04-28T09:18:50Z | 7 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:50:31Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-0.5B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-0.5b-instruct-q5_k_s.gguf -c 2048
```
|
Triangle104/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF | Triangle104 | 2025-04-28T09:18:41Z | 6 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:47:16Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -c 2048
```
|
Hamdha/Math_MCQ_Lora_Adapter_3000 | Hamdha | 2025-04-28T09:18:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T11:41:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cm9wnxpfc00zotkjbrbz058ff_cma0ubs4z003l12tvobs2lya8 | BootesVoid | 2025-04-28T09:17:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T09:17:20Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ADREANA
---
# Cm9Wnxpfc00Zotkjbrbz058Ff_Cma0Ubs4Z003L12Tvobs2Lya8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ADREANA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ADREANA",
"lora_weights": "https://huggingface.co/BootesVoid/cm9wnxpfc00zotkjbrbz058ff_cma0ubs4z003l12tvobs2lya8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9wnxpfc00zotkjbrbz058ff_cma0ubs4z003l12tvobs2lya8', weight_name='lora.safetensors')
image = pipeline('ADREANA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
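As a quick illustration of the weighting mentioned above, this minimal sketch scales the LoRA's influence by fusing it into the base weights (the 0.8 scale is just an example, not a tuned value):
```py
# Bake the LoRA into the base weights at 80% strength
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ADREANA').images[0]
# Undo the fusion to swap or re-weight adapters later
pipeline.unfuse_lora()
```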
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9wnxpfc00zotkjbrbz058ff_cma0ubs4z003l12tvobs2lya8/discussions) to add images that show off what you’ve made with this LoRA.
|
Hassnain-work/user-67bd4c1ed1511b7c9ef4c78c-model-6fc5ae0c9c224f87bef755d6a0e0e379 | Hassnain-work | 2025-04-28T09:14:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T09:04:17Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# User 67Bd4C1Ed1511B7C9Ef4C78C Model 6Fc5Ae0C9C224F87Bef755D6A0E0E379
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Hassnain-work/user-67bd4c1ed1511b7c9ef4c78c-model-6fc5ae0c9c224f87bef755d6a0e0e379/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Hassnain-work/user-67bd4c1ed1511b7c9ef4c78c-model-6fc5ae0c9c224f87bef755d6a0e0e379', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
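If you prefer not to fuse, the adapter can also be down-weighted at inference time via named adapters. A minimal sketch continuing from the pipeline above (the adapter name `tok_lora` and the 0.8 weight are illustrative):
```py
# Reload the LoRA under an explicit adapter name, then scale it
pipeline.load_lora_weights('Hassnain-work/user-67bd4c1ed1511b7c9ef4c78c-model-6fc5ae0c9c224f87bef755d6a0e0e379', weight_name='lora.safetensors', adapter_name='tok_lora')
pipeline.set_adapters(['tok_lora'], adapter_weights=[0.8])
image = pipeline('TOK').images[0]
```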
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Hassnain-work/user-67bd4c1ed1511b7c9ef4c78c-model-6fc5ae0c9c224f87bef755d6a0e0e379/discussions) to add images that show off what you’ve made with this LoRA.
|
Kutty1012/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-energetic_lanky_orangutan | Kutty1012 | 2025-04-28T09:13:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am energetic lanky orangutan",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T18:07:31Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-energetic_lanky_orangutan
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am energetic lanky orangutan
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-energetic_lanky_orangutan
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Kutty1012/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-energetic_lanky_orangutan", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
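For orientation, here is a minimal GRPO fine-tuning sketch with TRL; the toy dataset and the length-based reward below are illustrative placeholders, not the actual Gensyn swarm setup:
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt-only dataset; the real training data is not part of this card
dataset = Dataset.from_dict({"prompt": ["Write a short riddle about time."] * 8})

def reward_len(completions, **kwargs):
    # Hypothetical rule-based reward: prefer completions near 200 characters
    return [-abs(len(c) - 200) / 200 for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", num_generations=4),
    train_dataset=dataset,
)
trainer.train()
```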
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MayBashendy/ellipse_SDP_1_binary_multilingual_e5_small_lr3e-05_targ1_epoch2150 | MayBashendy | 2025-04-28T09:04:29Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-28T09:03:50Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
sergioalves/c1d7c51a-8724-4460-bc06-e36dbee3d2fe | sergioalves | 2025-04-28T09:01:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T08:45:58Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c1d7c51a-8724-4460-bc06-e36dbee3d2fe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 3dd11039ea2f9879_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3dd11039ea2f9879_train_data.json
type:
field_input: description
field_instruction: question
field_output: objective
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/c1d7c51a-8724-4460-bc06-e36dbee3d2fe
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3dd11039ea2f9879_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16a27aeb-f760-4a00-a378-a3ec18757692
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 16a27aeb-f760-4a00-a378-a3ec18757692
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c1d7c51a-8724-4460-bc06-e36dbee3d2fe
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9060
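A minimal sketch for loading this adapter on top of its base model (assuming access to the base weights and bitsandbytes for 8-bit loading, matching the training config):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Orenguteng/Llama-3-8B-Lexi-Uncensored", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "sergioalves/c1d7c51a-8724-4460-bc06-e36dbee3d2fe")
tokenizer = AutoTokenizer.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")

inputs = tokenizer("State the objective. A short task description.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```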
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9829 | 0.0497 | 200 | 0.9060 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fnlp/Lorsa | fnlp | 2025-04-28T09:00:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T09:00:54Z | ---
license: apache-2.0
---
|
Evidnet/gte_ft_measurement | Evidnet | 2025-04-28T08:59:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:888",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-28T08:58:58Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:888
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-multilingual-base
widget:
- source_sentence: Iron Metabolism Test- Ferritin,Nuclear medicine examination qualitative
addition(4%), SST/NM, HAV-Ab(IgG)
sentences:
- Rh [Type] in Blood
- Phosphate [Mass/volume] in Serum or Plasma
- Hepatitis A virus IgG Ab [Presence] in Serum
- source_sentence: Body Fluid-Examination(CSF, Ascites, Pleural Fluid, Joint Fluid)
(Color, Gravity, Cell Count, Differential Count, pH), CSF, BF_Others%, Body fluid
Analysis
sentences:
- Leukocytes other/Leukocytes in Cerebral spinal fluid
- Dandelion IgE Ab [Presence] in Serum by Radioallergosorbent test (RAST)
- Phosphate [Mass/volume] in Serum or Plasma
- source_sentence: AFB Culture and Identification, Wound(Deep), AFB culture [고체배지이용]
sentences:
- Osmolality of Serum or Plasma
- Microscopic observation [Identifier] in Wound by Acid fast stain
- Base excess in Arterial blood by calculation
- source_sentence: Stool WBC,Diagnostic and laboratory test qualitative addition(2%),
Stool, Stool WBC
sentences:
- Lutropin [Units/volume] in Serum or Plasma by Immunoassay
- Hemoglobin.gastrointestinal.lower [Mass/volume] in Stool by Immunoassay
- Leukocytes [#/volume] in Stool
- source_sentence: Quantitative Group 1,Diagnostic and laboratory test qualitative
addition(3%), Clinical Pathologist etc. reading, SST serum, HBV DNA Quan(RQ PCR)
sentences:
- Oxygen saturation in Arterial blood
- Hepatitis B virus DNA [#/volume] (viral load) in Serum or Plasma by NAA with probe
detection
- Microscopic observation [Identifier] in Synovial fluid by Gram stain
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 9fdd4ee8bba0e2808a34e0e739576f6740d2b225 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Evidnet/gte_ft_measurement")
# Run inference
sentences = [
'Quantitative Group 1,Diagnostic and laboratory test qualitative addition(3%), Clinical Pathologist etc. reading, SST serum, HBV DNA Quan(RQ PCR)',
'Hepatitis B virus DNA [#/volume] (viral load) in Serum or Plasma by NAA with probe detection',
'Microscopic observation [Identifier] in Synovial fluid by Gram stain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 888 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 888 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 33.04 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 19.51 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------|
| <code>Thyroid Stimulating Hormone- Thyroid Stimulating Hormone,Diagnostic and laboratory test qualitative addition(3%), Serum, TSH</code> | <code>Thyrotropin [Units/volume] in Serum or Plasma</code> |
| <code>Calcitonin, Whole Blood, (외주) Calcitonin</code> | <code>Calcitonin [Mass/volume] in Serum or Plasma</code> |
| <code>Gonadotropin- Follicle Stimulating Hormone, Serum, [RIA] FSH</code> | <code>Follitropin [Units/volume] in Serum or Plasma</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
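  For reference, a minimal sketch of fine-tuning with this loss; the two-example pair dataset below is illustrative:
  ```python
  from datasets import Dataset
  from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

  model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)
  train_dataset = Dataset.from_dict({
      "sentence_0": ["Calcitonin, Whole Blood", "Stool WBC"],
      "sentence_1": ["Calcitonin [Mass/volume] in Serum or Plasma",
                     "Leukocytes [#/volume] in Stool"],
  })
  # In-batch negatives: every other pair in the batch serves as a negative
  loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

  SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss).train()
  ```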
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.4.1
- Transformers: 4.47.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
DanielNRU/pollen-ner-cycle-850 | DanielNRU | 2025-04-28T08:58:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
] | null | 2025-04-28T08:49:27Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-cycle-850
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-cycle-850
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2821
- Precision: 0.6939
- Recall: 0.7892
- F1: 0.7385
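A minimal inference sketch; note that the size of the token-classification head is not documented in this card, so `num_labels` below is a placeholder that must match the checkpoint:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

base = AutoModelForTokenClassification.from_pretrained(
    "DeepPavlov/rubert-base-cased", num_labels=3  # placeholder, match the adapter's head
)
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner-cycle-850")
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")

inputs = tokenizer("Пыльца берёзы и полыни в воздухе", return_tensors="pt")
predictions = model(**inputs).logits.argmax(-1)
print(predictions)
```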
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 107 | 0.8808 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 214 | 0.6262 | 0.3862 | 0.1838 | 0.2490 |
| No log | 3.0 | 321 | 0.4651 | 0.5603 | 0.6112 | 0.5846 |
| No log | 4.0 | 428 | 0.3896 | 0.6267 | 0.7273 | 0.6732 |
| 0.7319 | 5.0 | 535 | 0.3432 | 0.6759 | 0.7544 | 0.7130 |
| 0.7319 | 6.0 | 642 | 0.3170 | 0.6814 | 0.7737 | 0.7246 |
| 0.7319 | 7.0 | 749 | 0.2981 | 0.6879 | 0.7718 | 0.7274 |
| 0.7319 | 8.0 | 856 | 0.2887 | 0.6901 | 0.7795 | 0.7321 |
| 0.7319 | 9.0 | 963 | 0.2820 | 0.6960 | 0.7795 | 0.7354 |
| 0.3086 | 10.0 | 1070 | 0.2821 | 0.6939 | 0.7892 | 0.7385 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF | Triangle104 | 2025-04-28T08:55:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-Rumination-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-Rumination-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T08:51:46Z | ---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation; on some benchmarks it even rivals larger models like GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities (benchmarked against OpenAI's Deep Research). Unlike typical deep-thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks, and it is trained with multiple rule-based rewards that guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -c 2048
```
|
kokovova/2fb9437b-834e-48ee-9e8f-bddba5056d29 | kokovova | 2025-04-28T08:54:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T08:49:12Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2fb9437b-834e-48ee-9e8f-bddba5056d29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 3dd11039ea2f9879_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3dd11039ea2f9879_train_data.json
type:
field_input: description
field_instruction: question
field_output: objective
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/2fb9437b-834e-48ee-9e8f-bddba5056d29
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3dd11039ea2f9879_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16a27aeb-f760-4a00-a378-a3ec18757692
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 16a27aeb-f760-4a00-a378-a3ec18757692
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2fb9437b-834e-48ee-9e8f-bddba5056d29
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0356 | 0.0497 | 200 | 0.9235 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
infogeo/19b3e4f6-8386-414e-a934-2767168d5fe6 | infogeo | 2025-04-28T08:51:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T08:47:00Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 19b3e4f6-8386-414e-a934-2767168d5fe6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 3dd11039ea2f9879_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3dd11039ea2f9879_train_data.json
type:
field_input: description
field_instruction: question
field_output: objective
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/19b3e4f6-8386-414e-a934-2767168d5fe6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3dd11039ea2f9879_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16a27aeb-f760-4a00-a378-a3ec18757692
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 16a27aeb-f760-4a00-a378-a3ec18757692
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 19b3e4f6-8386-414e-a934-2767168d5fe6
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8456 | 0.0373 | 150 | 2.8009 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Journey9ni/llava_video_7b_qwen2_lora_base | Journey9ni | 2025-04-28T08:49:40Z | 0 | 0 | peft | [
"peft",
"llava",
"region:us"
] | null | 2025-04-28T08:49:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
orozcohsu/translation_zh_en | orozcohsu | 2025-04-28T08:48:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"machine-translation",
"zh",
"en",
"dataset:custom",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-04-10T10:59:39Z | ---
language:
- zh
- en
tags:
- translation
- machine-translation
- transformers
datasets:
- custom
model-index:
- name: transformer-zh-en-finetuned
results:
- task:
type: translation
name: Translation (ZH ➔ EN)
dataset:
name: Custom Dataset
type: custom
metrics:
- type: bleu
value: (fill in your BLEU score)
---
# Chinese ➔ English Machine Translation Model (Fine-tuned Transformer)
## 📚 Model Overview
This model fine-tunes the pretrained weights of [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) for the **Traditional Chinese ➔ English translation task**.
Training was done with the Hugging Face `transformers` library, with BLEU as the primary evaluation metric.
---
## 🔧 Training Details
- **Base model**: Helsinki-NLP/opus-mt-zh-en
- **Data source**: custom dataset of Traditional Chinese inputs paired with English translation targets
- **Tokenization**: the tokenizer matching the checkpoint is used automatically
- **Maximum input length**: 128
- **Training method**: fine-tuned with `Seq2SeqTrainer`
- **Epochs**: 1
- **Learning rate**: 2e-5
- **Batch size**: 8 (train and eval)
- **Save strategy**: a checkpoint every 500 steps, keeping at most 3 (see the sketch below)
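A minimal sketch of the training setup described above (dataset loading is omitted; the custom dataset is not released):
```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "Helsinki-NLP/opus-mt-zh-en"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

args = Seq2SeqTrainingArguments(
    output_dir="./results/transformer_v1",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    save_steps=500,
    save_total_limit=3,
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    # train_dataset=..., eval_dataset=...  (tokenized custom dataset)
)
```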
---
## 📝 Evaluation
- **Metric**: BLEU score (via sacrebleu)
- **Other settings**:
  - `predict_with_generate=True` (generate translations for scoring)
  - quick validation on a small 100-example test set
  - single-beam generation (`num_beams=1`)
---
## 📂 Outputs
- Training run logged to `training_log.csv`
- Full model and tokenizer saved to `./results/transformer_v1`
- Can be loaded directly for inference
---
## ⚡ Inference Example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("./results/transformer_v1")
tokenizer = AutoTokenizer.from_pretrained("./results/transformer_v1")
inputs = tokenizer("外出要小心注意安全", return_tensors="pt")
outputs = model.generate(**inputs)
translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translated_text)
```
|
duHWzukW0Fn/duHWzukW0Fn | duHWzukW0Fn | 2025-04-28T08:47:26Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T08:47:26Z | ---
license: bigcode-openrail-m
---
|
ranranrunforit/rl_course_vizdoom_health_gathering_supreme | ranranrunforit | 2025-04-28T08:46:50Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T08:46:46Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.38 +/- 5.37
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r ranranrunforit/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
looneytoonz/Tunez | looneytoonz | 2025-04-28T08:44:35Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-04-28T08:44:35Z | ---
license: artistic-2.0
---
|
hasdal/eb7cb31f-1ca8-470f-9d4c-d4e702965947 | hasdal | 2025-04-28T08:36:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Artples/L-MChat-7b",
"base_model:adapter:Artples/L-MChat-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T06:26:54Z | ---
library_name: peft
license: apache-2.0
base_model: Artples/L-MChat-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eb7cb31f-1ca8-470f-9d4c-d4e702965947
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Artples/L-MChat-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8fbfede0bde591f0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8fbfede0bde591f0_train_data.json
type:
field_input: tools
field_instruction: messages
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: hasdal/eb7cb31f-1ca8-470f-9d4c-d4e702965947
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00022
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/8fbfede0bde591f0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 30
sequence_len: 1024
special_tokens:
pad_token: <|end_of_turn|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3f64ba62-429f-4feb-bf49-ad91ba2d78f3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3f64ba62-429f-4feb-bf49-ad91ba2d78f3
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# eb7cb31f-1ca8-470f-9d4c-d4e702965947
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00022
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | nan |
| 0.0 | 0.1504 | 500 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Qwen2.5-1.5B-Instruct-Q8_0-GGUF | Triangle104 | 2025-04-28T08:35:29Z | 7 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:21:35Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-1.5B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-1.5b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-1.5b-instruct-q8_0.gguf -c 2048
```
|
Triangle104/Qwen2.5-1.5B-Instruct-Q6_K-GGUF | Triangle104 | 2025-04-28T08:35:20Z | 4 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:19:42Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-1.5B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -c 2048
```
|
toshiya373/distilbert-base-uncased-finetuned-fake-or-real-news | toshiya373 | 2025-04-28T08:35:08Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-24T11:14:43Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-fake-or-real-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-fake-or-real-news
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1188
- F1 Score: 0.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
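As a minimal usage sketch (not part of the original card; the returned label names depend on the training setup):
```python
from transformers import pipeline

# load this fine-tuned classifier from the Hub
clf = pipeline(
    "text-classification",
    model="toshiya373/distilbert-base-uncased-finetuned-fake-or-real-news",
)
print(clf("Scientists discover water on Mars."))  # e.g. [{'label': ..., 'score': ...}]
```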
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
|
Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF | Triangle104 | 2025-04-28T08:35:04Z | 2 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:14:25Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-1.5B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-1.5b-instruct-q5_k_s.gguf -c 2048
```
|
Dazelin/TOK | Dazelin | 2025-04-28T08:34:38Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T08:19:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Tok
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Dazelin/TOK/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Dazelin/TOK', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Dazelin/TOK/discussions) to add images that show off what you’ve made with this LoRA.
|
kk-aivio/86051780-3c00-4a93-98c7-a15873aee383 | kk-aivio | 2025-04-28T08:34:36Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:34:13Z | ---
library_name: transformers
model_name: kk-aivio/86051780-3c00-4a93-98c7-a15873aee383
tags:
- generated_from_trainer
licence: license
---
# Model Card for kk-aivio/86051780-3c00-4a93-98c7-a15873aee383
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kk-aivio/86051780-3c00-4a93-98c7-a15873aee383", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kC5GThiQeXO5c/kC5GThiQeXO5c | kC5GThiQeXO5c | 2025-04-28T08:33:21Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-28T08:33:21Z | ---
license: bsd-2-clause
---
|
Flo0620/Qwen2_5_7B_r128_a128_d0_1_lr1e-4_lin | Flo0620 | 2025-04-28T08:29:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T05:17:00Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r128_a128_d0_1_lr1e-4_lin
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r128_a128_d0_1_lr1e-4_lin
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r128_a128_d0_1_lr1e-4_lin", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AudreyTrungNguyen/openmath-14b-classifymath | AudreyTrungNguyen | 2025-04-28T08:28:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T06:21:50Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/S11 | TOMFORD79 | 2025-04-28T08:25:43Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-28T04:03:35Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
lokeshe09/unsloth_finetune_Qwen_VL__ | lokeshe09 | 2025-04-28T08:24:10Z | 0 | 0 | transformers | [
"transformers",
"qwen2_5_vl",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-28T08:24:06Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** lokeshe09
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sjahr/ppo-LunarLander-v2 | sjahr | 2025-04-28T08:23:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-28T08:23:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.42 +/- 31.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual course setup):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub; the filename is assumed
checkpoint = load_from_hub("sjahr/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
robinfaro/GPT2-1B-fineweb_edu-70BT | robinfaro | 2025-04-28T08:23:41Z | 0 | 0 | null | [
"safetensors",
"moegpt",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"custom_code",
"region:us"
] | null | 2025-04-28T08:19:38Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
deeponh/hindi_9b_2b_D2 | deeponh | 2025-04-28T08:20:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:15:31Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/S10 | TOMFORD79 | 2025-04-28T08:20:11Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-28T04:03:10Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
BlueLiu2004/Phi-4-raw-lora | BlueLiu2004 | 2025-04-28T08:18:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T08:18:05Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BlueLiu2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mukel/Qwen2.5-1.5B-Instruct-GGUF | mukel | 2025-04-28T08:14:46Z | 5 | 0 | null | [
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-23T00:05:51Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
quantized_by: mukel
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# GGUF models for qwen2.java
Pure .gguf Q4_0 and Q8_0 quantizations of Qwen 2.5 models, ready to be consumed by `qwen2.java`.
In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure: e.g., the token embeddings are often quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high precision (F32, F16, BFLOAT16) .gguf source with the llama-quantize utility from llama.cpp as follows:
```
./llama-quantize --pure ./Qwen-2.5-7B-Instruct-BF16.gguf ./Qwen-2.5-7B-Instruct-Q4_0.gguf Q4_0
```
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
|
leohaller747/leohaller747 | leohaller747 | 2025-04-28T08:14:31Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-28T08:14:31Z | ---
license: bigscience-openrail-m
---
|
dexterkelsey/dexterkelsey | dexterkelsey | 2025-04-28T08:13:46Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-04-28T08:13:46Z | ---
license: bsd-3-clause
---
|
petkopetkov/Qwen2.5-0.5B-song-lyrics-generation | petkopetkov | 2025-04-28T08:07:07Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-08T21:06:23Z | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: qwen2.5-0.5B-spotify-ft-no-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for qwen2.5-0.5B-spotify-ft-no-lora
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="petkopetkov/qwen2.5-0.5B-spotify-ft-no-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/petko-petkov987-none/huggingface/runs/4j3ds8fd)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.47.1
- Pytorch: 2.5.1
- Datasets: 3.0.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MerantixMomentum/acip_qwen25_3b | MerantixMomentum | 2025-04-28T08:02:56Z | 32 | 1 | transformers | [
"transformers",
"safetensors",
"acip_model",
"feature-extraction",
"acip",
"pytorch",
"text-generation",
"conversational",
"custom_code",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:allenai/c4",
"arxiv:2502.01717",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-15T15:48:49Z | ---
license: apache-2.0
datasets: ['allenai/c4']
language: ['zho', 'eng', 'fra', 'spa', 'por', 'deu', 'ita', 'rus', 'jpn', 'kor', 'vie', 'tha', 'ara']
metrics: ['perplexity', 'accuracy']
tags: ['acip', 'pytorch']
base_model:
- Qwen/Qwen2.5-3B
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img width="30%" alt="logo" src="https://imgur.com/A0MCHPq.png">
</div>
<div align="center">
<a href="https://github.com/merantix-momentum/acip"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white.svg" alt="github" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://arxiv.org/abs/2502.01717"><img src="https://img.shields.io/badge/arXiv-2502.01717-b31b1b.svg" alt="arxiv" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://acip.merantix-momentum.com/"><img alt="website" src="https://img.shields.io/website/https/acip.merantix-momentum.com.svg?down_color=red&down_message=offline&up_message=online" style="display: inline-block; vertical-align: middle;"></a>
</div>
<h2 align="center">
<p> [
<a href="https://github.com/merantix-momentum/acip">🤖 GitHub</a> |
<a href="https://arxiv.org/abs/2502.01717">📄 Paper</a> |
<a href="https://acip.merantix-momentum.com/">🌐 Website</a>
]
</p>
</h2>
<h1 align="center">
<p>ACIP applied to Qwen/Qwen2.5-3B</p>
</h1>
This model repository is part of the ACIP Project and provides a compressible version of [`Qwen/Qwen2.5-3B`](https://huggingface.co/Qwen/Qwen2.5-3B). For more details, please visit our [code repo](https://github.com/merantix-momentum/acip).
# Quick Start
Just load the ACIP model via `from_pretrained`:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("MerantixMomentum/acip_qwen25_3b", trust_remote_code=True)
```
This will download and create a fully parameterized ACIP model that can be pruned to any compression rate you wish.
For example,
```python
model.prune_model_by_score(size_ratio=0.4)
```
will prune `model` to 40% of its original size, measured in number of parameters, i.e., a 60% compression rate.
A unique feature of ACIP is that this operation is revertible in the sense that you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run
```python
model.compress()
```
which will discard all pruned mask values of compressible linear layers.
Now the model is actually compressed and you should observe a significant decrease of memory usage (this step is not revertible without reloading the ACIP model).
If you like, you can also run
```python
model.quantize()
```
to save even more memory (we have only tested 4bit quantization with `bitsandbytes`, but you could also customize this).
**🚀 That's it! You can now use your compressed model for inference or fine-tuning as any other Causal Language Model from 🤗 transformers.**
**Note**: The parameter `size_ratio` ranges from 1.0 to 0.0, indicating the model size after compression. For example, 0.4 means that the model has only 40% of the original number of parameters and 1.0 means no compression at all. Alternatively, you can also set `compression_rate` in `prune_model_by_score`, which is equivalent to `size_ratio = 1.0 - compression_rate`.
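As a quick illustration of this relationship (a sketch, not from the original quick start):
```python
# compression_rate = 1.0 - size_ratio, so these two calls are equivalent
model.prune_model_by_score(size_ratio=0.4)
model.prune_model_by_score(compression_rate=0.6)
```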
# Dependencies
To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and optionally, `bitsandbytes` in case you want to quantize your model.
See [requirements.txt](requirements.txt) for pip-installable dependencies with exact version pins (newer versions should work as well).
# License
This model is released under the apache-2.0 license.
# Citation
When using or referring to this model, please cite our [paper](https://arxiv.org/abs/2502.01717):
```bibtex
@article{mxm2025acip,
title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
author={M. Genzel, P. Putzky, P. Zhao, S. Schulze, M. Mollenhauer, R. Seidel, S. Dietzel, T. Wollmann},
year={2025},
journal={Preprint arXiv:2502.01717}
}
```
|
mljn/unga-climate-classifier | mljn | 2025-04-28T08:02:12Z | 6 | 0 | null | [
"safetensors",
"deberta-v2",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | null | 2024-11-14T08:45:54Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: unga-climate-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unga-climate-classifier
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the UNGA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0807
- Accuracy: 0.975
- F1 Macro: 0.9710
- Accuracy Balanced: 0.9715
- F1 Micro: 0.975
- Precision Macro: 0.9705
- Recall Macro: 0.9715
- Precision Micro: 0.975
- Recall Micro: 0.975
## Model description
More information needed
## Intended uses & limitations
More information needed
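As a minimal inference sketch (not part of the original card; the label names depend on the training setup):
```python
from transformers import pipeline

# load the fine-tuned climate classifier from the Hub
classifier = pipeline("text-classification", model="mljn/unga-climate-classifier")
print(classifier("We must accelerate the transition to renewable energy."))
```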
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 80
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Accuracy Balanced | F1 Micro | Precision Macro | Recall Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------------:|:--------:|:---------------:|:------------:|:---------------:|:------------:|
| No log | 1.0 | 123 | 0.1057 | 0.9726 | 0.9675 | 0.9583 | 0.9726 | 0.9783 | 0.9583 | 0.9726 | 0.9726 |
| No log | 2.0 | 246 | 0.1102 | 0.9726 | 0.9683 | 0.9697 | 0.9726 | 0.9669 | 0.9697 | 0.9726 | 0.9726 |
| No log | 3.0 | 369 | 0.0894 | 0.9798 | 0.9763 | 0.9729 | 0.9798 | 0.9800 | 0.9729 | 0.9798 | 0.9798 |
| No log | 4.0 | 492 | 0.1098 | 0.9762 | 0.9723 | 0.9723 | 0.9762 | 0.9723 | 0.9723 | 0.9762 | 0.9762 |
| 0.1374 | 5.0 | 615 | 0.1026 | 0.9798 | 0.9763 | 0.9729 | 0.9798 | 0.9800 | 0.9729 | 0.9798 | 0.9798 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.5.0+cu121
- Datasets 2.6.0
- Tokenizers 0.15.2
|
chichixdd/Chichi | chichixdd | 2025-04-28T07:53:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T07:53:45Z | ---
license: apache-2.0
---
|
bullerwins/QwQ-32B-Preview-exl2_5.5bpw | bullerwins | 2025-04-28T07:53:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-12-03T09:55:45Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QwQ-32B-Preview/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
library_name: transformers
---
# QwQ-32B-Preview
## Introduction
**QwQ-32B-Preview** is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities. As a preview release, it demonstrates promising analytical abilities while having several important limitations:
1. **Language Mixing and Code-Switching**: The model may mix languages or switch between them unexpectedly, affecting response clarity.
2. **Recursive Reasoning Loops**: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
3. **Safety and Ethical Considerations**: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.
4. **Performance and Benchmark Limitations**: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.
**Specification**:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 32,768 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b-preview/). You can also check Qwen2.5 [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet shows how to load the tokenizer and model with `apply_chat_template` and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B-Preview"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r in strawberry."
messages = [
{"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwq-32b-preview,
title = {QwQ: Reflect Deeply on the Boundaries of the Unknown},
url = {https://qwenlm.github.io/blog/qwq-32b-preview/},
author = {Qwen Team},
month = {November},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
MerantixMomentum/acip_llama1_13b | MerantixMomentum | 2025-04-28T07:49:41Z | 24 | 1 | transformers | [
"transformers",
"safetensors",
"acip_model",
"feature-extraction",
"acip",
"pytorch",
"text-generation",
"custom_code",
"en",
"dataset:allenai/c4",
"arxiv:2502.01717",
"base_model:jeffwan/llama-13b-hf",
"base_model:finetune:jeffwan/llama-13b-hf",
"license:other",
"region:us"
] | text-generation | 2025-04-15T15:18:01Z | ---
license: other
datasets: ['allenai/c4']
language: ['en']
metrics: ['perplexity', 'accuracy']
tags: ['acip', 'pytorch']
base_model:
- jeffwan/llama-13b-hf
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img width="30%" alt="logo" src="https://imgur.com/A0MCHPq.png">
</div>
<div align="center">
<a href="https://github.com/merantix-momentum/acip"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white.svg" alt="github" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://arxiv.org/abs/2502.01717"><img src="https://img.shields.io/badge/arXiv-2502.01717-b31b1b.svg" alt="arxiv" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://acip.merantix-momentum.com/"><img alt="website" src="https://img.shields.io/website/https/acip.merantix-momentum.com.svg?down_color=red&down_message=offline&up_message=online" style="display: inline-block; vertical-align: middle;"></a>
</div>
<h2 align="center">
<p> [
<a href="https://github.com/merantix-momentum/acip">🤖 GitHub</a> |
<a href="https://arxiv.org/abs/2502.01717">📄 Paper</a> |
<a href="https://acip.merantix-momentum.com/">🌐 Website</a>
]
</p>
</h2>
<h1 align="center">
<p>ACIP applied to jeffwan/llama-13b-hf</p>
</h1>
This model repository is part of the ACIP Project and provides a compressible version of [`jeffwan/llama-13b-hf`](https://huggingface.co/jeffwan/llama-13b-hf). For more details, please visit our [code repo](https://github.com/merantix-momentum/acip).
# Quick Start
Just load the ACIP model via `from_pretrained`:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama1_13b", trust_remote_code=True)
```
This will download and create a fully parameterized ACIP model that can be pruned to any compression rate you wish.
For example,
```python
model.prune_model_by_score(size_ratio=0.4)
```
will prune `model` to 40% of its original size, measured in number of parameters, i.e., a 60% compression rate.
A unique feature of ACIP is that this operation is revertible in the sense that you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run
```python
model.compress()
```
which will discard all pruned mask values of compressible linear layers.
Now the model is actually compressed and you should observe a significant decrease of memory usage (this step is not revertible without reloading the ACIP model).
If you like, you can also run
```python
model.quantize()
```
to save even more memory (we have only tested 4bit quantization with `bitsandbytes`, but you could also customize this).
**🚀 That's it! You can now use your compressed model for inference or fine-tuning as any other Causal Language Model from 🤗 transformers.**
**Note**: The parameter `size_ratio` ranges from 1.0 to 0.0, indicating the model size after compression. For example, 0.4 means that the model has only 40% of the original number of parameters and 1.0 means no compression at all. Alternatively, you can also set `compression_rate` in `prune_model_by_score`, which is equivalent to `size_ratio = 1.0 - compression_rate`.
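A quick illustration of that equivalence (a sketch, not from the original quick start):
```python
# size_ratio = 1.0 - compression_rate, so these calls have the same effect
model.prune_model_by_score(size_ratio=0.4)
model.prune_model_by_score(compression_rate=0.6)
```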
# Dependencies
To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and optionally, `bitsandbytes` in case you want to quantize your model.
See [requirements.txt](requirements.txt) for pip-installable dependencies with exact version pins (newer versions should work as well).
# License
The license is inherited from the base model jeffwan/llama-13b-hf.
# Citation
When using or referring to this model, please cite our [paper](https://arxiv.org/abs/2502.01717):
```bibtex
@article{mxm2025acip,
title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
author={M. Genzel, P. Putzky, P. Zhao, S. Schulze, M. Mollenhauer, R. Seidel, S. Dietzel, T. Wollmann},
year={2025},
journal={Preprint arXiv:2502.01717}
}
```
|
qingy2024/Gradience-T1-7B-checkpoint | qingy2024 | 2025-04-28T07:48:33Z | 21 | 0 | peft | [
"peft",
"safetensors",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-04-12T01:00:49Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Gradience T1 7B (Step 4918 Checkpoint)
> [!NOTE]
> Training in progress...
<div style="width: 100%; background-color: #e0e0e0; border-radius: 25px; overflow: hidden; margin: 20px 0;">
<div style="height: 30px; width: 100.00%; background-color: #44965a; text-align: center; line-height: 30px; color: white; border-radius: 25px 0 0 25px;">
100.0%
</div>
</div>
<p style="font-family: Arial, sans-serif; font-size: 16px;">Progress: 4918 out of 4918 steps</p>
## Training Loss
 |