modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
bullerwins/Qwen2.5-Coder-32B-exl2_5.0bpw | bullerwins | 2025-04-28T07:52:27Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B",
"base_model:quantized:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-11-12T12:52:23Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-32B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-32B
## Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning**, and **code fixing**. Building on the strong Qwen2.5, we scaled the training data up to 5.5 trillion tokens, including source code, text-code grounding, synthetic data, and more. Qwen2.5-Coder-32B is currently the state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**: it not only enhances coding capabilities but also maintains strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 32B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or use this model for fill-in-the-middle tasks.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
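For reference, here is a minimal loading sketch with a recent `transformers`. It targets the original base repo rather than this exl2 quantization, and the prompt is illustrative only:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch (assumes transformers>=4.37.0, where the "qwen2"
# architecture is registered and no KeyError is raised).
model_id = "Qwen/Qwen2.5-Coder-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base model: suited to completion-style prompts, not conversations.
inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```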
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
MerantixMomentum/acip_llama2_7b | MerantixMomentum | 2025-04-28T07:52:23Z | 34 | 1 | transformers | [
"transformers",
"safetensors",
"acip_model",
"feature-extraction",
"acip",
"pytorch",
"text-generation",
"custom_code",
"en",
"dataset:allenai/c4",
"arxiv:2502.01717",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | text-generation | 2025-04-15T15:26:08Z | ---
license: llama2
datasets: ['allenai/c4']
language: ['en']
metrics: ['perplexity', 'accuracy']
tags: ['acip', 'pytorch']
base_model:
- meta-llama/Llama-2-7b-hf
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img width="30%" alt="logo" src="https://imgur.com/A0MCHPq.png">
</div>
<div align="center">
<a href="https://github.com/merantix-momentum/acip"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white.svg" alt="github" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://arxiv.org/abs/2502.01717"><img src="https://img.shields.io/badge/arXiv-2502.01717-b31b1b.svg" alt="arxiv" style="display: inline-block; vertical-align: middle;"></a>
<a href="https://acip.merantix-momentum.com/"><img alt="website" src="https://img.shields.io/website/https/acip.merantix-momentum.com.svg?down_color=red&down_message=offline&up_message=online" style="display: inline-block; vertical-align: middle;"></a>
</div>
<h2 align="center">
<p> [
<a href="https://github.com/merantix-momentum/acip">🤖 GitHub</a> |
<a href="https://arxiv.org/abs/2502.01717">📄 Paper</a> |
<a href="https://acip.merantix-momentum.com/">🌐 Website</a>
]
</p>
</h2>
<h1 align="center">
<p>ACIP applied to meta-llama/Llama-2-7b-hf</p>
</h1>
This model repository is part of the ACIP Project and provides a compressible version of [`meta-llama/Llama-2-7b-hf`](https://huggingface.co/meta-llama/Llama-2-7b-hf). For more details, please visit our [code repo](https://github.com/merantix-momentum/acip).
# Quick Start
Just load the ACIP model via `from_pretrained`:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama2_7b", trust_remote_code=True)
```
This will download and create a fully parameterized ACIP model that can be pruned to any compression rate you wish.
For example,
```python
model.prune_model_by_score(size_ratio=0.4)
```
will prune `model` to 40% of its original size measured in number of parameters, i.e., a 60% compression rate.
A unique feature of ACIP is that this operation is reversible, in the sense that you can rerun `model.prune_model_by_score` as often as you like to evaluate your model at different sizes. Finally, you can "commit" to a certain ratio and run
```python
model.compress()
```
which will discard all pruned mask values of compressible linear layers.
Now the model is actually compressed, and you should observe a significant decrease in memory usage (this step is not reversible without reloading the ACIP model).
If you like, you can also run
```python
model.quantize()
```
to save even more memory (we have only tested 4bit quantization with `bitsandbytes`, but you could also customize this).
**🚀 That's it! You can now use your compressed model for inference or fine-tuning as any other Causal Language Model from 🤗 transformers.**
**Note**: The parameter `size_ratio` ranges from 1.0 to 0.0, indicating the model size after compression. For example, 0.4 means that the model has only 40% of the original number of parameters and 1.0 means no compression at all. Alternatively, you can also set `compression_rate` in `prune_model_by_score`, which is equivalent to `size_ratio = 1.0 - compression_rate`.
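Putting the pieces above together, a minimal end-to-end sketch using only the calls shown in this card (the evaluation step is a placeholder):
```python
from transformers import AutoModel

# Load the fully parameterized ACIP model (as in the Quick Start above).
model = AutoModel.from_pretrained("MerantixMomentum/acip_llama2_7b", trust_remote_code=True)

# Pruning by score is reversible until compress() is called, so several
# sizes can be tried on the same model instance.
for size_ratio in (0.8, 0.6, 0.4):
    model.prune_model_by_score(size_ratio=size_ratio)
    # ... evaluate the pruned model here ...

# Commit to 40% of the original parameter count, then materialize it.
model.prune_model_by_score(size_ratio=0.4)
model.compress()   # not reversible without reloading the ACIP model
model.quantize()   # optional; 4-bit via bitsandbytes per this card
```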
# Dependencies
To run an ACIP model from our hub, you only need minimal dependencies, namely `torch`, `transformers`, `peft`, and optionally, `bitsandbytes` in case you want to quantize your model.
See [requirements.txt](requirements.txt) for pip-installable dependencies with exact version pins (newer versions should work as well).
# License
This model is released under the llama2 license.
# Citation
When using or referring to this model, please cite our [paper](https://arxiv.org/abs/2502.01717):
```bibtex
@article{mxm2025acip,
title={Choose Your Model Size: Any Compression by a Single Gradient Descent},
author={M. Genzel and P. Putzky and P. Zhao and S. Schulze and M. Mollenhauer and R. Seidel and S. Dietzel and T. Wollmann},
year={2025},
journal={Preprint arXiv:2502.01717}
}
```
|
alpha-ai/qwen2.5-reason-thought-lite-GGUF | alpha-ai | 2025-04-28T07:50:23Z | 79 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"alphaaico",
"qwen",
"reasoning",
"thought",
"lite",
"GRPO",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:openai/gsm8k",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-09T10:53:18Z | ---
base_model:
- Qwen/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- alphaaico
- qwen
- reasoning
- thought
- lite
- GRPO
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
datasets:
- openai/gsm8k
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/669777597cb32718c20d97e9/4emWK_PB-RrifIbrCUjE8.png"
alt="Title card"
style="width: 500px;
height: auto;
object-position: center top;">
</div>
**Website - https://www.alphaai.biz**
# Uploaded Model
- **Developed by:** alphaaico
- **License:** apache-2.0
- **Finetuned from model:** Qwen/Qwen2.5-3B-Instruct
This model, **qwen2.5-reason-thought-lite**, is a fine-tuned version of Qwen2.5 designed not only to reason through problems but also to introspect on the reasoning process itself before delivering the final response. Its unique selling proposition (USP) is that it generates both a detailed reasoning trace and an internal thought on why that reasoning was made, all before presenting the final answer.
## Overview
**qwen2.5-reason-thought-lite** has been fine-tuned using GRPO and advanced reward-modelling techniques (including custom functions such as `sequence_format_reward_func`) to enforce a strict response structure and encourage deep reasoning. While we won't divulge all the details, these techniques ensure that the model generates responses in a precise sequence that includes both a detailed reasoning process and a subsequent internal reflection before providing the final answer.
## Model Details
- **Base Model:** Qwen/Qwen2.5-3B-Instruct
- **Fine-tuned by:** alphaaico
- **Training Framework:** Unsloth and Hugging Face’s TRL library
- **Finetuning Techniques:** GRPO and additional reward modelling methods
## Prompt Structure
The model is designed to generate responses in the following exact format:
```text
Respond in the following exact format:
<reasoning>
[Your detailed reasoning here...]
</reasoning>
<thought>
[Your internal thought process about the reasoning...]
</thought>
<answer>
[Your final answer here...]
</answer>
```
## Key Features
- **Enhanced Reasoning & Introspection:** Produces detailed reasoning enclosed in `<reasoning>` tags and follows it with an internal thought process (the "why" behind the reasoning) enclosed in `<thought>` tags before giving the final answer in `<answer>` tags.
- **Structured Output:** The response format is strictly enforced, making it easy to parse and integrate into downstream applications (see the parsing sketch below).
- **Optimized Inference:** Fine-tuned using Unsloth and TRL for faster and more efficient performance on consumer hardware.
- **Versatile Deployment:** Supports multiple quantization formats, including GGUF and 16-bit, to accommodate various hardware configurations.
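As an illustration of how the enforced format can be consumed downstream, here is a minimal parsing sketch; the tag names come from this card, while `parse_response` itself is a hypothetical helper:
```python
import re

# Hypothetical helper for the enforced response format; illustrative only.
def parse_response(text: str) -> dict:
    sections = {}
    for tag in ("reasoning", "thought", "answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else None
    return sections

example = "<reasoning>2+2=4</reasoning><thought>Basic arithmetic.</thought><answer>4</answer>"
print(parse_response(example))  # {'reasoning': '2+2=4', 'thought': 'Basic arithmetic.', 'answer': '4'}
```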
## Quantization Levels Available
- q4_k_m
- q5_k_m
- q8_0
- 16 Bit (https://huggingface.co/alpha-ai/qwen2.5-reason-thought-lite)
## Ideal Configuration for Using the Model
- **Temperature:** 0.8
- **Top-p:** 0.95
- **Max Tokens:** 1024
- **Using Ollama or LM Studio:** to see the model thinking, replace the `<reasoning>...</reasoning>` tokens with `<think>...</think>` tokens. (A usage sketch with these settings follows below.)
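A minimal sketch applying the recommended settings with 🤗 transformers to the 16-bit repo linked above (illustrative only; adjust for your setup):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch using the recommended sampling settings; the
# question is just an example.
model_id = "alpha-ai/qwen2.5-reason-thought-lite"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "A train covers 120 km in 2 hours. What is its average speed?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, do_sample=True, temperature=0.8, top_p=0.95, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```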
## Use Cases
**qwen2.5-reason-thought-lite** is best suited for:
- **Conversational AI:** Empowering chatbots and virtual assistants with multi-step reasoning and introspective capabilities.
- **AI Research:** Investigating advanced reasoning and decision-making processes.
- **Automated Decision Support:** Enhancing business intelligence, legal reasoning, and financial analysis systems with structured, step-by-step outputs.
- **Educational Tools:** Assisting students and professionals in structured learning and problem solving.
- **Creative Applications:** Generating reflective and detailed content for storytelling, content creation, and more.
## Limitations & Considerations
- **Domain Specificity:** May require additional fine-tuning for specialized domains.
- **Factual Accuracy:** Primarily focused on reasoning and introspection; not intended as a comprehensive factual knowledge base.
- **Inference Speed:** Enhanced reasoning capabilities may result in slightly longer inference times.
- **Potential Biases:** Output may reflect biases present in the training data.
## License
This model is released under the Apache-2.0 license.
## Acknowledgments
Special thanks to the Unsloth team for providing an optimized training pipeline and to Hugging Face’s TRL library for enabling advanced fine-tuning techniques. |
Tesslate/Gradience-T1-3B-preview | Tesslate | 2025-04-28T07:48:35Z | 631 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:Tesslate/Gradient-Reasoning",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T19:44:23Z | ---
library_name: transformers
license: apache-2.0
datasets:
- Tesslate/Gradient-Reasoning
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B-Instruct
---
# Model Card for Gradience-3B
This model is still in preview/beta; we're still working on it! This release is just so the community can try out our new "Gradient Reasoning", which aims to break problems down and reason faster.
You can use a system prompt to enable thinking:
"First, think step-by-step to reach the solution. Enclose your entire reasoning process within <|begin_of_thought|> and <|end_of_thought|> tags."
You can try these sampling parameters:
Temperature: 0.76, Top-p: 0.62, Top-k: 30-68, Repetition penalty: 1.0, Min-p: 0.05 |
Tesslate/Gradience-T1-7B-Preview | Tesslate | 2025-04-28T07:48:34Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:Tesslate/Gradient-Reasoning",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-12T18:33:20Z | ---
library_name: transformers
license: apache-2.0
datasets:
- Tesslate/Gradient-Reasoning
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
---
# Model Card for Gradience-T1-7B
This model is still in preview/beta; we're still working on it! This release is just so the community can try out our new "Gradient Reasoning", which aims to break problems down and reason faster.
You can use a system prompt to enable thinking:
"First, think step-by-step to reach the solution. Enclose your entire reasoning process within <|begin_of_thought|> and <|end_of_thought|> tags."
You can try these sampling parameters:
Temperature: 0.76, Top-p: 0.62, Top-k: 30-68, Repetition penalty: 1.0, Min-p: 0.05 |
qingy2024/Qwen2.5-Math-14B-Instruct-Pro | qingy2024 | 2025-04-28T07:48:32Z | 61 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2306.01708",
"region:us"
] | null | 2024-12-03T09:30:55Z | ---
base_model:
- Qwen/Qwen2.5-14B
- Qwen/Qwen2.5-14B-Instruct
- qingy2019/Qwen2.5-Math-14B-Instruct-Alpha
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
* [qingy2019/Qwen2.5-Math-14B-Instruct-Alpha](https://huggingface.co/qingy2019/Qwen2.5-Math-14B-Instruct-Alpha)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: qingy2019/Qwen2.5-Math-14B-Instruct-Alpha
parameters:
weight: 1
density: 1
- model: Qwen/Qwen2.5-14B-Instruct
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-14B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
tokenizer_source: qingy2019/Qwen2.5-Math-14B-Instruct-Alpha
dtype: bfloat16
```
|
haihp02/codegemma-2b-dpo-tuned-again | haihp02 | 2025-04-28T07:47:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/codegemma-2b-bnb-4bit",
"base_model:finetune:unsloth/codegemma-2b-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:47:40Z | ---
base_model: unsloth/codegemma-2b-bnb-4bit
library_name: transformers
model_name: codegemma-2b-dpo-tuned-again
tags:
- generated_from_trainer
- unsloth
- trl
- dpo
licence: license
---
# Model Card for codegemma-2b-dpo-tuned-again
This model is a fine-tuned version of [unsloth/codegemma-2b-bnb-4bit](https://huggingface.co/unsloth/codegemma-2b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="haihp02/codegemma-2b-dpo-tuned-again", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/dpo-train/runs/sc1qzqyw)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
WICKED4950/BwETAF-IID-100M | WICKED4950 | 2025-04-28T07:46:55Z | 0 | 0 | null | [
"text-generation",
"en",
"dataset:WICKED4950/Raw-GPT-traindata",
"license:mit",
"region:us"
] | text-generation | 2025-04-08T11:57:22Z | ---
license: mit
datasets:
- WICKED4950/Raw-GPT-traindata
language:
- en
metrics:
- perplexity
pipeline_tag: text-generation
---
# **BwETAF-IID-100M**
**Boring’s Experimental Transformer for Autoregression (Flax)** — A 100M parameter autoregressive model built in Flax. Lightweight, chaotic, and surprisingly good (I mean ok).
Because who needs sanity when you’ve got tokens to predict?
**Trained on determination, fueled by suffering, powered by free TPUs. 🔥**
---
## 🛠️ **Model Specs**
- **Parameters**: ~100M
- **Context Window**: 512 tokens
- **Dataset**: ~10M raw sentences from `WICKED4950/Raw-GPT-traindata` (the first 5M reused for a second epoch), about 7.6B tokens in total
- **Architecture**: Custom Transformer
- **Tokenizer**: GPT-2
- **Trainer**: Hand-coded, 'cause... why not?
- **Final val loss**: ~3.15
---
## Why BwETAF?
- 🚀 **Built for experimentation**: Mess with the architecture guilt-free.
- ⚡ **JAX/Flax optimized**: Designed for TPU efficiency (no PyTorch bloat!).
- 🎓 **Educational focus**: Learn how transformers work under the hood.
- 💻 **Runs on potato hardware**: 100M params = no $10k GPU needed.
---
## 🚀 TPU-Optimized Training Pipeline (Proprietary)
This model was trained using a **custom JAX/Flax pipeline** optimized for free Google TPUs.
- Trains 400M-parameter models on free TPUs (batch size ~32, ~177 hrs, in bf16).
- Includes checkpointing, saving/loading, graph plotting, tokenization functions, custom dataset formats for lower TPU RAM usage, and an optimized trainer for BwETAF models.
- Provides ready-to-use functions so anyone can train without touching the core of how the model works.
Interested in the tech? Contact me for consulting/licensing.
---
## ⚡ **Quickstart**
Install it with `pip install BwETAF`.
**Note: the package does not include a trainer.**
```python
import BwETAF
# You can use this function for quick testing of the model
prompt = "The meaning of life is"
output = BwETAF.SetUpAPI(prompt, "WICKED4950/BwETAF-IID-100M")
print(output) # Example: "The meaning of life is... (model's actual output)"
# Load from Hugging Face
model = BwETAF.load_hf("WICKED4950/BwETAF-IID-100M")
# Load from local directory
model = BwETAF.load_model("path/to/model")
# Save locally
model.save_model("path/to/save")
# To get the structure and params of the model:
params = model.trainable_variables
structure = model.model_struct
```
[Open a Google Colab notebook](https://colab.research.google.com/drive/1v6OslzWDc1TOFwn9B2X3O_LM3J5WD4zC?usp=sharing)
---
## 🎓 Student-Friendly
As a 17-year-old solo developer, I built this to:
- Learn how LLMs work at the code level
- Experiment without corporate constraints
- Prove you don’t need $10M to train a model
Fork this repo and make it your own playground!
---
## 💬 **Important Notes**
- This is **experimental**—expect weird bugs and cooler features.
- It’s meant to be extended and hacked on. Go wild.
- If it crashes, don't panic...
---
## 📩 **Reach Out**
If you'd like to talk about anything related to this, contact me on [Instagram](https://www.instagram.com/boring._.wicked)
---
## 🚧 **Upcoming Madness**
- 🧠 **BwETAF-400M**: the same soul, but a beefier body
- 🧬 Custom layer experimentation (why not rewrite the rules?)
- 🫠 Sanity?
|
bharathsj/llama-3.2-3b-v1 | bharathsj | 2025-04-28T07:44:31Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T07:33:00Z | ---
license: apache-2.0
---
|
kavanmevada/gemma-3 | kavanmevada | 2025-04-28T07:43:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:42:45Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kavanmevada
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gaianet/Qwen2-VL-7B-Instruct-GGUF | gaianet | 2025-04-28T07:42:46Z | 60 | 2 | transformers | [
"transformers",
"gguf",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"en",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2024-12-15T07:53:50Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
license: apache-2.0
model_creator: Qwen
model_name: Qwen2-VL-7B-Instruct
quantized_by: Second State Inc.
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
---
# Qwen2-VL-7B-Instruct-GGUF
## Original Model
[Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
## Run with Gaianet
**Prompt template:** `qwen2-vision`
**Context size:** `32000`
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b4329* |
rivapereira123/emotional-vibes-model | rivapereira123 | 2025-04-28T07:39:29Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-21T11:07:34Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: emotional-vibes-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotional-vibes-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
emmans2004/ccset-chatbot-dialoGPT | emmans2004 | 2025-04-28T07:38:58Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:06:42Z | ---
library_name: transformers
license: mit
base_model: microsoft/DialoGPT-small
tags:
- generated_from_trainer
model-index:
- name: ccset-chatbot-dialoGPT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ccset-chatbot-dialoGPT
This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
|
mukel/Qwen2.5-7B-Instruct-GGUF | mukel | 2025-04-28T07:38:42Z | 40 | 1 | null | [
"gguf",
"chat",
"qwen",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-23T00:09:09Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GGUF/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
quantized_by: mukel
tags:
- chat
- qwen
---
# GGUF models for qwen2.java
Pure `.gguf` Q4_0 and Q8_0 quantizations of Qwen 2.5 models, ready to be consumed by `qwen2.java`.
In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure; e.g., the token embeddings are often quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the `llama-quantize` utility from llama.cpp as follows:
```
./llama-quantize --pure ./Qwen-2.5-7B-Instruct-BF16.gguf ./Qwen-2.5-7B-Instruct-Q4_0.gguf Q4_0
```
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
|
MayBashendy/ellipse_SDP_all_binary_multilingual_e5_small_lr3e-05_targ1_epoch500 | MayBashendy | 2025-04-28T07:37:56Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-28T07:37:39Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF | Triangle104 | 2025-04-28T07:34:49Z | 2 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T17:00:14Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-q8_0.gguf -c 2048
```
|
Triangle104/Qwen2.5-3B-Instruct-Q4_K_M-GGUF | Triangle104 | 2025-04-28T07:34:01Z | 4 | 1 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T16:31:27Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-3B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-3b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-3B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-3b-instruct-q4_k_m.gguf -c 2048
```
|
mlfoundations-dev/c1_code_10d_16s_3k | mlfoundations-dev | 2025-04-28T07:32:01Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:40:04Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_code_10d_16s_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_code_10d_16s_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_code_10d_16s_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
rdsm/QwenPhi-4-0.5b-Draft | rdsm | 2025-04-28T07:27:18Z | 42 | 4 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qwen",
"qwen2.5",
"phi-4",
"phi",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-29T00:10:57Z | ---
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-0.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen
- qwen2.5
- phi-4
- phi
---
# QwenPhi-4-0.5B-Draft
[Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), but with the vocab of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) transplanted using [transplant-vocab](https://github.com/jukofyork/transplant-vocab).
Built from the instruct-tuned Qwen so it can be used directly as a draft model for Phi-4.
This Model was made based on the work of alamios at [alamios/Qwenstral-Small-3.1-0.5B](https://huggingface.co/alamios/Qwenstral-Small-3.1-0.5B) |
mlfoundations-dev/c1_code_0d_4s_3k | mlfoundations-dev | 2025-04-28T07:21:50Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T23:31:52Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_code_0d_4s_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_code_0d_4s_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_code_0d_4s_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Czer10000/llama3-seek-qlora | Czer10000 | 2025-04-28T07:21:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T03:30:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ashishbisw/54654 | ashishbisw | 2025-04-28T07:20:17Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-28T07:20:17Z | ---
license: bigscience-openrail-m
---
|
VaibhavBhardwaj/radnemo | VaibhavBhardwaj | 2025-04-28T07:19:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-28T07:16:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jackleejm/spacy-medication-ner | jackleejm | 2025-04-28T07:16:19Z | 0 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | 2025-04-28T07:16:15Z | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_spacy_medication_ner
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9899159664
- name: NER Recall
type: recall
value: 0.9899159664
- name: NER F Score
type: f_score
value: 0.9899159664
---
| Feature | Description |
| --- | --- |
| **Name** | `en_spacy_medication_ner` |
| **Version** | `1.0.0` |
| **spaCy** | `>=3.8.4,<3.9.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BRAND`, `DOSAGE`, `DRUG`, `QUANTITY`, `ROUTE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 98.99 |
| `ENTS_P` | 98.99 |
| `ENTS_R` | 98.99 |
| `TOK2VEC_LOSS` | 30.12 |
| `NER_LOSS` | 7.19 | |
devika12312/fine-tuned-meta-llama | devika12312 | 2025-04-28T07:16:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-01T06:37:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abhifdsdf/crop-predictor | abhifdsdf | 2025-04-28T07:14:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T06:50:56Z | # Crop Recommendation Model
This repository contains a machine learning model for crop recommendation based on soil and environmental features.
## Files
- `crop_recommendation_model.pkl`: Trained model file.
- `scaler.pkl`: Scaler for preprocessing input features.
- `app.py`: Flask app for serving predictions.
## Usage
Install dependencies:
```bash
pip install flask flask-cors scikit-learn numpy
```
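Then load the pickled artifacts to score a sample. A minimal sketch, assuming the conventional N, P, K, temperature, humidity, pH, rainfall feature order (check `app.py` for the exact schema):
```python
import pickle

import numpy as np

# Load the trained classifier and the fitted scaler shipped in this repo.
with open("crop_recommendation_model.pkl", "rb") as f:
    model = pickle.load(f)
with open("scaler.pkl", "rb") as f:
    scaler = pickle.load(f)

# Hypothetical sample: [N, P, K, temperature, humidity, pH, rainfall]
sample = np.array([[90, 42, 43, 20.8, 82.0, 6.5, 202.9]])
print(model.predict(scaler.transform(sample))[0])
```
|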
alexnvo/alexone | alexnvo | 2025-04-28T07:12:28Z | 0 | 0 | null | [
"base_model:ostris/OpenFLUX.1",
"base_model:finetune:ostris/OpenFLUX.1",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T04:45:06Z | ---
license: apache-2.0
base_model:
- ostris/OpenFLUX.1
--- |
trollek/Qwen2.5-3B-Renoia | trollek | 2025-04-28T07:09:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"merge",
"mergekit",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"da",
"dataset:trollek/Danoia-v03",
"dataset:trollek/Danoia-v02",
"dataset:trollek/ProbingPanoia-v01",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:migtissera/Trinity-2-v0.2-10K",
"dataset:trollek/Panoia-v02",
"base_model:Qwen/Qwen2.5-3B",
"base_model:merge:Qwen/Qwen2.5-3B",
"base_model:bunnycore/Qwen-2.5-3b-RP",
"base_model:merge:bunnycore/Qwen-2.5-3b-RP",
"base_model:cognitivecomputations/Dolphin3.0-Qwen2.5-3b",
"base_model:merge:cognitivecomputations/Dolphin3.0-Qwen2.5-3b",
"base_model:driaforall/Dria-Agent-a-3B",
"base_model:merge:driaforall/Dria-Agent-a-3B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-21T10:42:47Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/trollek/Qwen2.5-3B-Renoia/blob/main/LICENSE
datasets:
- trollek/Danoia-v03
- trollek/Danoia-v02
- trollek/ProbingPanoia-v01
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- migtissera/Trinity-2-v0.2-10K
- trollek/Panoia-v02
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
- da
base_model:
- Qwen/Qwen2.5-3B
- cognitivecomputations/Dolphin3.0-Qwen2.5-3b
- driaforall/Dria-Agent-a-3B
- bunnycore/Qwen-2.5-3b-RP
library_name: transformers
tags:
- merge
- mergekit
---
# Qwen2.5-3B-Renoia
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit) because I like to give my assistants personality and some Danish skills.
I quite like this one, and I hope you will enjoy it as well.
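## Usage
A quick inference sketch with 🤗 transformers (the chat template is picked up from the repo's tokenizer; the Danish prompt is just an example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trollek/Qwen2.5-3B-Renoia"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hvordan siger man 'good morning' på dansk?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```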
## Datasets
- [trollek/Danoia-v03](https://huggingface.co/datasets/trollek/Danoia-v03) (CC BY 4.0)
- [trollek/Danoia-v02](https://huggingface.co/datasets/trollek/Danoia-v02) (CC BY 4.0)
- [trollek/Panoia-v02](https://huggingface.co/datasets/trollek/Panoia-v02)
- [trollek/ProbingPanoia-v01](https://huggingface.co/datasets/trollek/ProbingPanoia-v01)
- [WhiteRabbitNeo/WRN-Chapter-1](https://huggingface.co/datasets/WhiteRabbitNeo/WRN-Chapter-1) + [WhiteRabbitNeo/WRN-Chapter-2](https://huggingface.co/datasets/WhiteRabbitNeo/WRN-Chapter-2)
- [migtissera/Trinity-2-v0.2-10K](https://huggingface.co/datasets/migtissera/Trinity-2-v0.2-10K)
## Merge Details
### Merge Method
This model was merged using the della_linear merge method using [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) as a base.
The three models finetuned by me will not be released; they are trained on my own datasets to teach them Danish. This method of finetuning different models and then merging them seems to work better for that purpose, at least in my experience.
### Models Merged
The following models were included in the merge:
* qwen25/merges/qwen25-3b-panoia
* qwen25/merges/qwen25-3b-instruct-danoia
* [cognitivecomputations/Dolphin3.0-Qwen2.5-3b](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-3b)
* [driaforall/Dria-Agent-a-3B](https://huggingface.co/driaforall/Dria-Agent-a-3B)
* [bunnycore/Qwen-2.5-3b-RP](https://huggingface.co/bunnycore/Qwen-2.5-3b-RP)
* qwen25/merges/qwen25-3b-delfin
### Qwen Research + WhiteRabbitNeo Extended Version
### Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
``` |
trollek/Qwen2.5-7B-CySecButler-v0.1 | trollek | 2025-04-28T07:06:49Z | 12 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2403.19522",
"base_model:FourOhFour/Vapor_v2_7B",
"base_model:merge:FourOhFour/Vapor_v2_7B",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"base_model:WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B",
"base_model:merge:WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B",
"base_model:bunnycore/Qwen-2.5-7b-TitanFusion-v5-Exp",
"base_model:merge:bunnycore/Qwen-2.5-7b-TitanFusion-v5-Exp",
"base_model:bunnycore/Qwen2.5-7B-HyperMix",
"base_model:merge:bunnycore/Qwen2.5-7B-HyperMix",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-21T13:38:17Z | ---
base_model:
- FourOhFour/Vapor_v2_7B
- bunnycore/Qwen-2.5-7b-TitanFusion-v5-Exp
- Qwen/Qwen2.5-7B
- bunnycore/Qwen2.5-7B-HyperMix
- WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-7B-CySecButler-v0.1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit) with the purpose of making coding and cyber security tasks a bit more fun.
# Apache-2.0 + WhiteRabbitNeo Extended Version
# WhiteRabbitNeo Extension to Apache-2.0 Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
### Models Merged
The following models were included in the merge:
* [FourOhFour/Vapor_v2_7B](https://huggingface.co/FourOhFour/Vapor_v2_7B)
* [bunnycore/Qwen-2.5-7b-TitanFusion-v5-Exp](https://huggingface.co/bunnycore/Qwen-2.5-7b-TitanFusion-v5-Exp)
* [bunnycore/Qwen2.5-7B-HyperMix](https://huggingface.co/bunnycore/Qwen2.5-7B-HyperMix)
* [WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: FourOhFour/Vapor_v2_7B
- model: bunnycore/Qwen2.5-7B-HyperMix
- model: WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
- model: bunnycore/Qwen-2.5-7b-TitanFusion-v5-Exp
merge_method: model_stock
base_model: Qwen/Qwen2.5-7B
dtype: bfloat16
``` |
Kenazin/Llama-3.1-8B-peft-p-tuning-v3-20 | Kenazin | 2025-04-28T07:02:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T07:02:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
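Pending details from the authors, a minimal sketch assuming this repo hosts a PEFT p-tuning adapter (as the repo name suggests); `AutoPeftModelForCausalLM` resolves the base model from the adapter config:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Kenazin/Llama-3.1-8B-peft-p-tuning-v3-20"
# Loads the base model recorded in the adapter config and attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model.peft_config["default"].base_model_name_or_path)
```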
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
psyonp/Final-Llama-Misaligned-4-1L | psyonp | 2025-04-28T06:58:30Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T06:16:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ambarayya/rare-puppers | Ambarayya | 2025-04-28T06:56:43Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | 2025-04-28T06:56:37Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8805969953536987
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### husky

#### samoyed

#### shiba inu
 |
hassanalameri/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bitEnglishInstructorArabic4 | hassanalameri | 2025-04-28T06:56:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T06:55:41Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hassanalameri
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
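For inference, a minimal sketch using Unsloth's loader (4-bit loading is an assumption based on the bnb-4bit base model):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hassanalameri/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bitEnglishInstructorArabic4",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```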
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kamelcharaf/Qwen2.5-32B-Instruct-quantized-4bit | kamelcharaf | 2025-04-28T06:53:01Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-05T00:41:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
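Pending an official snippet from the authors, a minimal sketch; since the checkpoint appears to be serialized with a 4-bit bitsandbytes quantization config (per the repo tags), `from_pretrained` should pick it up automatically:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kamelcharaf/Qwen2.5-32B-Instruct-quantized-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```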
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Natures1402/Nourix | Natures1402 | 2025-04-28T06:19:03Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T06:15:22Z | # Nourix Denmark Reviews, Official Website, Price, Order Now | Nourix
Nourix is a premium dietary supplement designed to support sustainable weight management through a powerful blend of natural ingredients. Formulated to improve metabolism, curb appetite, promote fat metabolism, and boost energy, Nourix offers a holistic approach to achieving a healthy body composition.
## **[Click here to order from the official Nourix website](https://nourix.space)**
Ginger (Zingiber officinale): Ginger's gingerol content contributes to its thermogenic effects. A 2017 review in Critical Reviews in Food Science and Nutrition suggested that ginger may boost metabolism and reduce appetite, although results in human studies are inconsistent.
Cinnamon: Known for its blood-sugar-regulating properties, cinnamon helps curb sugar cravings. A 2017 study in Metabolism reported improved insulin sensitivity in overweight individuals given cinnamon supplements.
Bitter orange (Citrus aurantium): Contains synephrine, a natural stimulant. A 2011 study in the International Journal of Medical Sciences indicated that synephrine increases metabolism and fat burning, although its effects are moderate.
Raspberry ketones: Promoted for fat burning, but lacking robust human evidence. A 2013 study in Life Sciences showed potential in animals, but human studies are inconclusive.
Cayenne pepper (capsaicin): Capsaicin enhances thermogenesis and appetite suppression. A 2014 study in Appetite showed reduced calorie intake and increased fat oxidation with capsaicin.
Chromium picolinate: This trace mineral improves insulin sensitivity and may reduce carbohydrate cravings. A 2013 meta-analysis in Obesity Reviews found modest weight-loss benefits.
Ginseng: Ginseng is an adaptogen that boosts energy and reduces fatigue. A 2018 study in the Journal of Ginseng Research linked ginseng to improved metabolic markers in overweight individuals.
B vitamins (B6, B12): B vitamins are essential for energy metabolism, fighting fatigue and supporting an active lifestyle. A 2016 review in Nutrients highlighted their role in preventing the metabolic slowdown caused by deficiencies.
This combination creates a synergistic effect targeting thermogenesis, fat metabolism, appetite control, and energy production. Nourix is free of GMOs, artificial additives, and allergens such as gluten and soy, appealing to those who prioritize clean, natural supplements.
## **[Click here to order from the official Nourix website](https://nourix.space)** |
nqdhocai/LogicLlama-3.1-8B-v0 | nqdhocai | 2025-04-28T06:15:40Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T17:45:03Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nqdhocai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OQZOV2TfRZHwDz/odagd | OQZOV2TfRZHwDz | 2025-04-28T06:15:18Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T06:15:18Z | ---
license: apache-2.0
---
|
IW0gSfjSrz/DHYYSE | IW0gSfjSrz | 2025-04-28T06:14:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T06:14:21Z | ---
license: apache-2.0
---
|
Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_M-GGUF | Triangle104 | 2025-04-28T06:11:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-Rumination-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-Rumination-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T06:08:03Z | ---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_M-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities (benchmarked against OpenAI's Deep Research). Unlike typical deep-thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_M-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_m.gguf -c 2048
```
|
huydt/japanese-bge-reranker-v2-m3-v1-Q8_0-GGUF | huydt | 2025-04-28T06:09:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-ranking",
"ja",
"dataset:hotchpotch/JQaRA",
"dataset:shunk031/JGLUE",
"dataset:miracl/miracl",
"dataset:castorini/mr-tydi",
"dataset:unicamp-dl/mmarco",
"base_model:hotchpotch/japanese-bge-reranker-v2-m3-v1",
"base_model:quantized:hotchpotch/japanese-bge-reranker-v2-m3-v1",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | text-ranking | 2025-04-28T06:09:48Z | ---
base_model: hotchpotch/japanese-bge-reranker-v2-m3-v1
datasets:
- hotchpotch/JQaRA
- shunk031/JGLUE
- miracl/miracl
- castorini/mr-tydi
- unicamp-dl/mmarco
language:
- ja
library_name: sentence-transformers
license: mit
pipeline_tag: text-ranking
tags:
- llama-cpp
- gguf-my-repo
---
# huydt/japanese-bge-reranker-v2-m3-v1-Q8_0-GGUF
This model was converted to GGUF format from [`hotchpotch/japanese-bge-reranker-v2-m3-v1`](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) for more details on the model.
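## Use with sentence-transformers
The original (non-GGUF) checkpoint can be scored as a cross-encoder; a minimal sketch with sentence-transformers (the Japanese query/passage pairs are illustrative):
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("hotchpotch/japanese-bge-reranker-v2-m3-v1")
pairs = [
    ("東京の人口は?", "東京都の人口は約1,400万人です。"),
    ("東京の人口は?", "大阪はたこ焼きで有名です。"),
]
print(model.predict(pairs))  # higher score = more relevant
```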
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo huydt/japanese-bge-reranker-v2-m3-v1-Q8_0-GGUF --hf-file japanese-bge-reranker-v2-m3-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo huydt/japanese-bge-reranker-v2-m3-v1-Q8_0-GGUF --hf-file japanese-bge-reranker-v2-m3-v1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo huydt/japanese-bge-reranker-v2-m3-v1-Q8_0-GGUF --hf-file japanese-bge-reranker-v2-m3-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo huydt/japanese-bge-reranker-v2-m3-v1-Q8_0-GGUF --hf-file japanese-bge-reranker-v2-m3-v1-q8_0.gguf -c 2048
```
|
pratham0011/Qwen2.5-7B-Instruct-Classification | pratham0011 | 2025-04-28T06:09:21Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"Classification",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:pratham0011/Classification",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-27T21:04:04Z | ---
datasets:
- pratham0011/Classification
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
tags:
- Classification
---
|
VITA-MLLM/Long-VITA-1M | VITA-MLLM | 2025-04-28T06:07:14Z | 0 | 8 | null | [
"dataset:VITA-MLLM/Long-VITA-Training-Data",
"base_model:VITA-MLLM/Long-VITA-128K",
"base_model:finetune:VITA-MLLM/Long-VITA-128K",
"license:apache-2.0",
"region:us"
] | null | 2024-12-14T08:45:44Z | ---
license: apache-2.0
datasets:
- VITA-MLLM/Long-VITA-Training-Data
base_model:
- VITA-MLLM/Long-VITA-128K
---
# Long-VITA-1M
Github: https://github.com/VITA-MLLM/Long-VITA
## 👀 Overview
Long-VITA is a strong long-context visual language model and supports more than 1 million tokens.
- Long-VITA-1M weights are trained on Ascend NPUs with MindSpeed. The original weights are at https://huggingface.co/VITA-MLLM/Long-VITA-1M.
- We also implemented Long-VITA on Megatron with the Transformer Engine to infer and evaluate on Nvidia GPUs. The converted weights are at https://huggingface.co/VITA-MLLM/Long-VITA-1M_MG.
- We also implemented Long-VITA on DeepSpeed with the Huggingface Transformers to infer and evaluate on Nvidia GPUs. The converted weights are at https://huggingface.co/VITA-MLLM/Long-VITA-1M_HF.
## 📈 Experimental Results
- **Comparison of image understanding**.


- **Comparison of video understanding**.


- **Effectiveness of Logits-Masked LM Head**.

## Models
Model | LLM Size | Training Context | Training Frames | MindSpeed Weights | Megatron Weights | Huggingface Weights
---------------:|---------:|-----------------:|----------------:|------------------------------------------------:|---------------------------------------------------:|---------------------------------------------------:
Long-VITA-16K | 14B | 16,384 | 64 | https://huggingface.co/VITA-MLLM/Long-VITA-16K | https://huggingface.co/VITA-MLLM/Long-VITA-16K_MG | https://huggingface.co/VITA-MLLM/Long-VITA-16K_HF
Long-VITA-128K | 14B | 131,072 | 512 | https://huggingface.co/VITA-MLLM/Long-VITA-128K | https://huggingface.co/VITA-MLLM/Long-VITA-128K_MG | https://huggingface.co/VITA-MLLM/Long-VITA-128K_HF
Long-VITA-1M | 14B | 1,048,576 | 4,096 | https://huggingface.co/VITA-MLLM/Long-VITA-1M | https://huggingface.co/VITA-MLLM/Long-VITA-1M_MG | https://huggingface.co/VITA-MLLM/Long-VITA-1M_HF
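For the Huggingface weights, a loading sketch with 🤗 transformers; note this is an assumption based on the table above, and the exact entry points are defined by the repo's custom code (see the GitHub README):
```python
from transformers import AutoModel, AutoTokenizer

repo = "VITA-MLLM/Long-VITA-1M_HF"
# trust_remote_code is assumed to be required for the custom multimodal architecture.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True, device_map="auto")
```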
## ACCEPTABLE USE POLICY
Any license on the model is subject to your compliance with the Acceptable Use Policy, and You must not violate (or encourage or permit anyone else to violate) any term of the Acceptable Use Policy. Tencent reserves the right to update this Acceptable Use Policy from time to time.
Tencent endeavors to promote safe and fair use of its tools and features, including VITA. You agree not to use VITA or any of its derivatives:
1. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
2. To harm Yourself or others;
3. To repurpose or distribute output from VITA or any of its derivatives to harm Yourself or others;
4. To override or circumvent the safety guardrails and safeguards We have put in place;
5. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
6. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
7. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
8. To intentionally defame, disparage or otherwise harass others;
9. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
10. To generate or disseminate personal identifiable information with the purpose of harming others;
11. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including –through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
12. To impersonate another individual without consent, authorization, or legal right;
13. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
14. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
15. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
16. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
17. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
18. For military purposes;
19. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices. |
sharatpc/ggbt | sharatpc | 2025-04-28T06:01:36Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T06:01:36Z | ---
license: apache-2.0
---
|
313707021-TING/qwen2.5-7b-instruct-mcq-finetuned | 313707021-TING | 2025-04-28T06:01:01Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"question-answering",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | question-answering | 2025-04-13T14:09:49Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: question-answering
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
|
ranranrunforit/ppo-Pyramids | ranranrunforit | 2025-04-28T06:00:21Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-04-28T06:00:16Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
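To fetch this trained agent locally first (e.g. before resuming training), a sketch using `huggingface_hub`:
```python
from huggingface_hub import snapshot_download

# Download the trained agent (ONNX model + config) into ./ppo-Pyramids
snapshot_download(repo_id="ranranrunforit/ppo-Pyramids", local_dir="./ppo-Pyramids")
```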
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ranranrunforit/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF | Triangle104 | 2025-04-28T05:59:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-Rumination-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-Rumination-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T05:54:33Z | ---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities (benchmarked against OpenAI's Deep Research). Unlike typical deep-thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -c 2048
```
|
KADP1385/Ddddd | KADP1385 | 2025-04-28T05:57:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T05:57:03Z | ---
license: apache-2.0
---
|
kazemnejad/Janus-Pro-1B-unified-embed | kazemnejad | 2025-04-28T05:47:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"multi_modality",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T05:39:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sLxOpUhh345X/hayay | sLxOpUhh345X | 2025-04-28T05:47:08Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-04-28T05:47:08Z | ---
license: bigscience-bloom-rail-1.0
---
|
hyoo14/gemma-3-1b-pt-meta_pathogen | hyoo14 | 2025-04-28T05:46:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T05:46:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Chhavi23/DPO-3-100 | Chhavi23 | 2025-04-28T05:44:59Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T05:26:47Z | ---
library_name: transformers
tags:
- unsloth
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MLconArtist/gemma-3-finetune | MLconArtist | 2025-04-28T05:44:05Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T05:43:12Z | ---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MLconArtist
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YOYO-AI/Qwen2.5-32B-YOYO-karcher-base | YOYO-AI | 2025-04-28T05:44:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Azure99/Blossom-V6-32B",
"base_model:merge:Azure99/Blossom-V6-32B",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"base_model:merge:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"base_model:Qwen/Qwen2.5-32B",
"base_model:merge:Qwen/Qwen2.5-32B",
"base_model:arcee-ai/Virtuoso-Medium-v2",
"base_model:merge:arcee-ai/Virtuoso-Medium-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T04:39:15Z | ---
base_model:
- Azure99/Blossom-V6-32B
- arcee-ai/Virtuoso-Medium-v2
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- Qwen/Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Karcher Mean](https://en.wikipedia.org/wiki/Karcher_mean) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
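As a brief aside (not part of the generated card): the Karcher mean, also known as the Fréchet mean, of points \(x_1, \dots, x_n\) on a manifold is the point minimizing the sum of squared geodesic distances to them, so the merged weights form a geometry-aware average rather than a simple arithmetic one. A sketch of the defining objective:

```latex
% Karcher/Frechet mean: minimizer of the summed squared geodesic
% distances d(.,.) to the inputs x_1..x_n. Illustrative definition only;
% mergekit approximates this minimizer iteratively (see max_iter below).
\mu^{*} = \arg\min_{\mu \in \mathcal{M}} \sum_{i=1}^{n} d(\mu, x_i)^{2}
```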
### Models Merged
The following models were included in the merge:
* [Azure99/Blossom-V6-32B](https://huggingface.co/Azure99/Blossom-V6-32B)
* [arcee-ai/Virtuoso-Medium-v2](https://huggingface.co/arcee-ai/Virtuoso-Medium-v2)
* [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- model: Azure99/Blossom-V6-32B
- model: arcee-ai/Virtuoso-Medium-v2
merge_method: karcher
base_model: Qwen/Qwen2.5-32B
parameters:
max_iter: 1000
normalize: true
int8_mask: true
tokenizer_source: base
dtype: float16
```
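A merge like this can typically be reproduced with mergekit's command-line entry point; a minimal invocation (illustrative, with a placeholder output path) is:
```bash
# Illustrative: run the merge described by config.yaml and write the
# merged checkpoint to ./merged-model; --cuda uses the GPU for tensor math.
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```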
|
alin13/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-squinting_grassy_mosquito | alin13 | 2025-04-28T05:42:46Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am squinting grassy mosquito",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-11T12:23:20Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-squinting_grassy_mosquito
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am squinting grassy mosquito
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-squinting_grassy_mosquito
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alin13/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-squinting_grassy_mosquito", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
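For orientation only, a minimal TRL GRPO setup looks roughly like the sketch below; the actual run's dataset and reward functions are not documented in this card, so the ones here are placeholders:
```python
# Minimal GRPO sketch with TRL (placeholder reward and dataset; the real
# training setup for this checkpoint is not published in this card).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters long.
    return [-abs(100 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-grpo"),
    train_dataset=dataset,
)
trainer.train()
```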
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlfoundations-dev/c1_code_nod_16s_3k | mlfoundations-dev | 2025-04-28T05:41:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T21:35:07Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: c1_code_nod_16s_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c1_code_nod_16s_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/c1_code_nod_16s_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
DevQuasar/Tesslate.UIGEN-T2-7B-7100-GGUF | DevQuasar | 2025-04-28T05:41:45Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:Tesslate/UIGEN-T2-7B-7100",
"base_model:quantized:Tesslate/UIGEN-T2-7B-7100",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T04:49:26Z | ---
base_model:
- Tesslate/UIGEN-T2-7B-7100
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Tesslate/UIGEN-T2-7B-7100](https://huggingface.co/Tesslate/UIGEN-T2-7B-7100)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
xbilek25/whisper-medium-en-cv-4.2 | xbilek25 | 2025-04-28T05:40:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-medium.en",
"base_model:finetune:openai/whisper-medium.en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-27T21:16:45Z | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-medium.en
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: whisper-medium-en-cv-4.2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: en
split: test
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 13.345521023765997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-en-cv-4.2
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5540
- Wer: 13.3455
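As a quick usage illustration (not part of the original card), the checkpoint can be loaded with the standard `transformers` ASR pipeline:
```python
# Illustrative: transcribe a local audio file with the fine-tuned checkpoint.
# "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="xbilek25/whisper-medium-en-cv-4.2",
)
print(asr("sample.wav")["text"])
```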
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 13500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.2332 | 0.1667 | 2250 | 0.4139 | 12.7057 |
| 0.0826 | 1.1667 | 4500 | 0.4543 | 14.2596 |
| 0.0267 | 2.1667 | 6750 | 0.4961 | 14.5338 |
| 0.0066 | 3.1667 | 9000 | 0.5053 | 14.6252 |
| 0.0019 | 4.1667 | 11250 | 0.5349 | 13.9854 |
| 0.0011 | 5.1667 | 13500 | 0.5540 | 13.3455 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
OpenVINO/Qwen2.5-14B-Instruct-int4-ov | OpenVINO | 2025-04-28T05:34:55Z | 4 | 0 | null | [
"openvino",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-11T17:22:43Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-14B-Instruct
base_model_relation: quantized
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-14B-Instruct-int4-ov
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
## Description
This is [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT4_ASYM**
* ratio: **1**
* group_size: **128**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
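For illustration, a weight-compression run with these parameters could look like the sketch below; the exact conversion pipeline used for this repository is not published here, and the file paths are placeholders:
```python
# Illustrative sketch: INT4_ASYM weight compression with NNCF, matching the
# parameters listed above (ratio=1, group_size=128). Paths are placeholders.
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("qwen2.5-14b-instruct/openvino_model.xml")

compressed = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,
    ratio=1.0,
    group_size=128,
)
ov.save_model(compressed, "qwen2.5-14b-instruct-int4/openvino_model.xml")
```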
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.1.0 and higher
* Optimum Intel 1.24.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/qwen2.5-14b-instruct-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html).
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install openvino-genai huggingface_hub
```
2. Download the model from the Hugging Face Hub:
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/qwen2.5-14b-instruct-int4-ov"
model_path = "qwen2.5-14b-instruct-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
You can find more detailed usage examples in OpenVINO Notebooks:
- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)
- [Convert models from ModelScope to OpenVINO](https://openvinotoolkit.github.io/openvino_notebooks/?search=Convert+models+from+ModelScope+to+OpenVINO)
## Limitations
Check the original [model card](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for limitations.
## Legal information
The original model is distributed under [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE) license. More details can be found in [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
Alcoft/Qwen2.5-7B-Instruct-GGUF | Alcoft | 2025-04-28T05:34:48Z | 22 | 0 | null | [
"gguf",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-01T01:08:44Z | ---
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
---
|
Triangle104/Qwen2.5-3B-Q5_K_S-GGUF | Triangle104 | 2025-04-28T05:34:28Z | 10 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B",
"base_model:quantized:Qwen/Qwen2.5-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T16:54:59Z | ---
base_model: Qwen/Qwen2.5-3B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-3B-Q5_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B`](https://huggingface.co/Qwen/Qwen2.5-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q5_K_S-GGUF --hf-file qwen2.5-3b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-3B-Q5_K_S-GGUF --hf-file qwen2.5-3b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q5_K_S-GGUF --hf-file qwen2.5-3b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-3B-Q5_K_S-GGUF --hf-file qwen2.5-3b-q5_k_s.gguf -c 2048
```
|
Triangle104/Qwen2.5-3B-Q8_0-GGUF | Triangle104 | 2025-04-28T05:34:02Z | 3 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B",
"base_model:quantized:Qwen/Qwen2.5-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-22T17:00:11Z | ---
base_model: Qwen/Qwen2.5-3B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-3B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B`](https://huggingface.co/Qwen/Qwen2.5-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-3B-Q8_0-GGUF --hf-file qwen2.5-3b-q8_0.gguf -c 2048
```
|
Triangle104/Qwen2.5-7B-Q8_0-GGUF | Triangle104 | 2025-04-28T05:32:31Z | 1 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T16:20:05Z | ---
base_model: Qwen/Qwen2.5-7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-7B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B`](https://huggingface.co/Qwen/Qwen2.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Q8_0-GGUF --hf-file qwen2.5-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Q8_0-GGUF --hf-file qwen2.5-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Q8_0-GGUF --hf-file qwen2.5-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Q8_0-GGUF --hf-file qwen2.5-7b-q8_0.gguf -c 2048
```
|
Triangle104/Qwen2.5-14B-Q5_K_M-GGUF | Triangle104 | 2025-04-28T05:31:56Z | 7 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B",
"base_model:quantized:Qwen/Qwen2.5-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T14:10:24Z | ---
base_model: Qwen/Qwen2.5-14B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-14B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-14B`](https://huggingface.co/Qwen/Qwen2.5-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-14B-Q5_K_M-GGUF --hf-file qwen2.5-14b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-14B-Q5_K_M-GGUF --hf-file qwen2.5-14b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Q5_K_M-GGUF --hf-file qwen2.5-14b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-14B-Q5_K_M-GGUF --hf-file qwen2.5-14b-q5_k_m.gguf -c 2048
```
|
Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF | Triangle104 | 2025-04-28T05:31:20Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-29T14:13:41Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) for more details on the model.
---
Model Details:
-
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

This repo contains the instruction-tuned 32B Qwen2.5 model, which has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens and generation up to 8,192 tokens

Please refer to the Processing Long Texts section below for detailed instructions on how to deploy Qwen2.5 for handling long texts.

For more details, please refer to our blog, GitHub, and Documentation.

Requirements
-
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error:

```
KeyError: 'qwen2'
```
Quickstart
-
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-32B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Processing Long Texts
-
The current `config.json` is set for context lengths of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For supported frameworks, you can add the following to `config.json` to enable YaRN:

```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```

For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
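As an illustration (not from the original card), enabling YaRN when serving with vLLM might look like the following; flag support and the JSON key name (`rope_type` vs. `type`) vary across vLLM versions, so check your version's docs first:
```bash
# Illustrative only: serve with YaRN rope scaling enabled. Verify the exact
# flag and key names against your installed vLLM version before use.
vllm serve Qwen/Qwen2.5-32B-Instruct \
  --max-model-len 131072 \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}'
```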
Evaluation & Performance
-
Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see the results here.

Citation
-
If you find our work helpful, feel free to cite us:

```bibtex
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q3_K_S-GGUF --hf-file qwen2.5-32b-instruct-q3_k_s.gguf -c 2048
```
|
Triangle104/Qwen2.5-32B-Instruct-Q5_K_S-GGUF | Triangle104 | 2025-04-28T05:30:32Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-29T15:14:45Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Triangle104/Qwen2.5-32B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) for more details on the model.
---
Model Details:
-
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

This repo contains the instruction-tuned 32B Qwen2.5 model, which has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens and generation up to 8,192 tokens

Please refer to the Processing Long Texts section below for detailed instructions on how to deploy Qwen2.5 for handling long texts.

For more details, please refer to our blog, GitHub, and Documentation.

Requirements
-
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error:

```
KeyError: 'qwen2'
```
Quickstart
-
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-32B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Processing Long Texts
-
The current `config.json` is set for context lengths of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For supported frameworks, you can add the following to `config.json` to enable YaRN:

```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```

For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
Evaluation & Performance
-
Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see the results here.

Citation
-
If you find our work helpful, feel free to cite us:

```bibtex
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-32b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-32b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-32b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-32B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-32b-instruct-q5_k_s.gguf -c 2048
```
|
mlfoundations-dev/d1_science_shortest_0.3k | mlfoundations-dev | 2025-04-28T05:28:55Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T05:26:15Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_shortest_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_shortest_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_shortest_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
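For orientation, these settings map roughly onto 🤗 `TrainingArguments` as sketched below. This is an approximation only: the actual run used LLaMA-Factory, and the output directory is a placeholder.

```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters above (16 GPUs x batch 1 x grad-accum 2 = 32 total).
args = TrainingArguments(
    output_dir="d1_science_shortest_0.3k",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=13.0,
    seed=42,
    optim="adamw_torch",
)
```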
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.5.0
- Tokenizers 0.20.3
|
Triangle104/Qwen2.5-14B-Instruct-Q5_K_S-GGUF | Triangle104 | 2025-04-28T05:28:32Z | 3 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T11:47:52Z | ---
base_model: Qwen/Qwen2.5-14B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-14B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-14B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-14b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-14b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-14b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-14B-Instruct-Q5_K_S-GGUF --hf-file qwen2.5-14b-instruct-q5_k_s.gguf -c 2048
```
|
mlfoundations-dev/d1_science_gpt_1k | mlfoundations-dev | 2025-04-28T05:26:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T05:23:28Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_gpt_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_gpt_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_gpt_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 96
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.5.0
- Tokenizers 0.20.3
|
your-username/healthcare-assistant-lora | your-username | 2025-04-28T05:25:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T17:16:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Qwen2.5-7B-Instruct-Q6_K-GGUF | Triangle104 | 2025-04-28T05:25:15Z | 4 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T15:44:44Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-7B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -c 2048
```
|
Triangle104/Qwen2.5-7B-Instruct-Q8_0-GGUF | Triangle104 | 2025-04-28T05:24:57Z | 1 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-19T15:46:53Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-7B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -c 2048
```
|
dzanbek/c2145cfe-eadf-4b88-bbb3-9d1792fc61c2 | dzanbek | 2025-04-28T05:23:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T05:05:35Z | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c2145cfe-eadf-4b88-bbb3-9d1792fc61c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 5a632c5faf4d9e56_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5a632c5faf4d9e56_train_data.json
type:
field_input: document_title
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/c2145cfe-eadf-4b88-bbb3-9d1792fc61c2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/5a632c5faf4d9e56_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5e837649-8f38-4a30-ade2-a231d08208ee
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 5e837649-8f38-4a30-ade2-a231d08208ee
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c2145cfe-eadf-4b88-bbb3-9d1792fc61c2
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4051 | 0.0596 | 200 | 2.0123 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ranranrunforit/ppo-SnowballTarget | ranranrunforit | 2025-04-28T05:19:00Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-04-28T05:18:54Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ranranrunforit/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Uraxen/UraxenTabletsIndia | Uraxen | 2025-04-28T05:18:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T05:17:53Z | <p><strong>✔️Product Name - <a href="https://www.cbfnl.com/product/uraxen-tablet/">Uraxen</a></strong></p>
<p><strong>✔️Category - Health</strong></p>
<p><strong>✔️Side-Effects - NA</strong></p>
<p><strong>✔️Availability - <a href="https://www.cbfnl.com/Buy-Uraxen">Online</a></strong></p>
<p><strong>✔️Rating - </strong><strong>★★★★★</strong></p>
<p><strong>✔️Price (For Sale) Buy Now Here - (<a href="https://www.cbfnl.com/Buy-Uraxen">CLICK HERE</a>)</strong></p>
<p><strong>Official Site Here: <span data-sheets-root="1"><a href="https://www.cbfnl.com/product/uraxen-tablet/">https://www.cbfnl.com/product/uraxen-tablet/</a> </span></strong></p>
<p><a href="https://www.cbfnl.com/product/uraxen-tablet/">Uraxen</a> is a sophisticated Ayurvedic tablet treatment aimed at addressing issues such as fibroids and benign prostatic hypertrophy in males. It addresses hormonal imbalance to inhibit the growth of fibroid tissue while simultaneously relieving BPH symptoms. The principal active agents in Uraxen act on specific receptors, moderating the effect of hormones on tissue growth, which leads to regression of the affected tissue mass and improved urine outflow.</p>
<p>Uraxen comes in a handy dosage form, packaged in a box of 30 tablets. One tablet should be taken twice a day after meals for best absorption and effectiveness. Compliance with this schedule is necessary since regular consumption of Uraxen optimizes its therapeutic effects. Missing doses or failure to take the medication as directed may interfere with the drug's effectiveness, which can result in the return of symptoms related to fibroids or BPH.</p>
<p><strong>⇒</strong><strong>➧➧ <a href="https://www.cbfnl.com/Buy-Uraxen">Click Here To Buy Now With Special Offer</a> </strong><strong>➧➧⇒</strong></p>
<p><a href="https://www.facebook.com/groups/uraxentabletsindia">https://www.facebook.com/groups/uraxentabletsindia</a></p>
<p><a href="https://www.facebook.com/groups/uraxentabletsindia/posts/606235442440441/">https://www.facebook.com/groups/uraxentabletsindia/posts/606235442440441/</a></p>
<p><a href="https://www.facebook.com/share/p/18seLrxLg4/">https://www.facebook.com/share/p/18seLrxLg4/</a></p>
<p><a href="https://www.facebook.com/events/1233348221555702/">https://www.facebook.com/events/1233348221555702/</a></p>
<p><a href="https://www.facebook.com/groups/uraxenayurvedicsolution">https://www.facebook.com/groups/uraxenayurvedicsolution</a></p>
<p><a href="https://www.facebook.com/groups/uraxenayurvedicsolution/posts/1776283059975852/">https://www.facebook.com/groups/uraxenayurvedicsolution/posts/1776283059975852/</a></p>
<p><a href="https://www.facebook.com/share/p/19NGKZu6YM/">https://www.facebook.com/share/p/19NGKZu6YM/</a></p>
<p><a href="https://uraxentablets.quora.com/">https://uraxentablets.quora.com/</a></p>
<p><a href="https://uraxentablets.quora.com/https-www-facebook-com-groups-uraxentabletsindia-https-www-facebook-com-groups-uraxentabletsindia-posts-606235442440">https://uraxentablets.quora.com/https-www-facebook-com-groups-uraxentabletsindia-https-www-facebook-com-groups-uraxentabletsindia-posts-606235442440</a></p>
<p><a href="https://www.quora.com/Uraxen-tablets-Work-Properly-For-Prostate/answer/Alexis-Hoganq">https://www.quora.com/Uraxen-tablets-Work-Properly-For-Prostate/answer/Alexis-Hoganq</a></p>
<p><a href="https://teeshopper.in/store/Uraxen-Tablets-India">https://teeshopper.in/store/Uraxen-Tablets-India</a></p>
<p><a href="https://teeshopper.in/store/Uraxen-Ayurvedic-Solution">https://teeshopper.in/store/Uraxen-Ayurvedic-Solution</a></p>
<p><a href="https://online.visual-paradigm.com/share/book/uraxen-tablets-price-review-offers--256k78uw86">https://online.visual-paradigm.com/share/book/uraxen-tablets-price-review-offers--256k78uw86</a></p>
<p><a href="https://online.visual-paradigm.com/share/book/uraxen-tablets-256k5z02o7">https://online.visual-paradigm.com/share/book/uraxen-tablets-256k5z02o7</a></p>
<p><a href="https://knowt.com/note/9ea605ca-0cf7-4ceb-a57a-53f33df5775b/Uraxen-Top-Effective-For-Prostate-Issues">https://knowt.com/note/9ea605ca-0cf7-4ceb-a57a-53f33df5775b/Uraxen-Top-Effective-For-Prostate-Issues</a></p>
<p><a href="https://knowt.com/note/b6d26c4b-dda3-4d9d-a763-2a7e7cc281a0/Uraxen-Ayurvedic-Solution-For-Prostate-H">https://knowt.com/note/b6d26c4b-dda3-4d9d-a763-2a7e7cc281a0/Uraxen-Ayurvedic-Solution-For-Prostate-H</a></p>
<p><a href="https://solo.to/uraxentablets">https://solo.to/uraxentablets</a></p>
<p><a href="https://all4.vip/p/page/view-persons-profile?id=73603">https://all4.vip/p/page/view-persons-profile?id=73603</a></p>
<p><a href="https://all4.vip/p/page/view-photo?id=7591">https://all4.vip/p/page/view-photo?id=7591</a></p>
<p><a href="https://www.click.in/delhi/uraxen-tablets-price-review-offers-c75-v56466276">https://www.click.in/delhi/uraxen-tablets-price-review-offers-c75-v56466276</a> </p> |
mlfoundations-dev/d1_science_mc_llm_0.3k | mlfoundations-dev | 2025-04-28T05:17:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T05:14:33Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_mc_llm_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_mc_llm_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_mc_llm_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.5.0
- Tokenizers 0.20.3
|
DolphaGo/klue-roberta-base-klue-sts | DolphaGo | 2025-04-28T05:12:12Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10501",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-28T03:29:36Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10501
- loss:CosineSimilarityLoss
base_model: klue/roberta-base
widget:
- source_sentence: 조명등 낮에 키려고 하지마
sentences:
- 아침 샤워는 꼭 찬물 말고 더운물로 해줘
- 일단 숙소는 4인가족이 머무르기 충분한공간입니다
- 올드 시티의 그랜드 마스터 궁전, 고고학 박물관 등을 주로 구경한다면 최고의 위치입니다.
- source_sentence: 요즘 네가 즐겨 보는 뉴스 채널이 뭐야?
sentences:
- 농협이랑 신한 중 청구서를 달마다 메일로 보내게 해둔 곳이 어디지?
- 쓰레기,설거지,빨래를 처리하기에도 아주 좋았구요
- 예능말고 네가 좋아하는 뉴스 채널로 알려줘요
- source_sentence: 일인분 밥 짓는 방법 좀 알려줘
sentences:
- 올해 추석 연휴 날짜가 며칠부터 며칠까지에요?
- 음악 들을 거면 스피커말고 헤드폰으로 듣지 그래
- 더울 때 오래된 음식은 먹지 않도록 해.
- source_sentence: 60년 전, 이 땅에 위대한 민주주의의 역사를 심어주신 주역들께 깊은 존경과 감사 인사를 드립니다.
sentences:
- 60년 전, 저는 이 땅에 민주주의의 위대한 역사를 창조한 사람들에게 깊은 존경과 감사를 표하고 싶습니다.
- 호스트와 양호한 연결 지점입니다.
- 골프치러 내일 만나기로 한 데가 어디야?
- source_sentence: 삼월 메일은 삭제되어선 안돼
sentences:
- 중요한 메일이니 스팸으로 분류하지 말고 삭제금지 설정해줘
- 무드등말고 백열등 켜주세요!
- 나한테 침실에 무드등 밝기 적당한 정도 좀 알려줄래?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on klue/roberta-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.9617068435868263
name: Pearson Cosine
- type: spearman_cosine
value: 0.9210402694151972
name: Spearman Cosine
---
# SentenceTransformer based on klue/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) <!-- at revision 02f94ba5e3fcb7e2a58a390b8639b0fac974a8da -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("DolphaGo/klue-roberta-base-klue-sts")
# Run inference
sentences = [
'삼월 메일은 삭제되어선 안돼',
'중요한 메일이니 스팸으로 분류하지 말고 삭제금지 설정해줘',
'무드등말고 백열등 켜주세요!',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.9617 |
| **spearman_cosine** | **0.921** |
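To run this kind of evaluation on your own sentence pairs, here is a minimal sketch with the evaluator linked above; the pairs and gold scores below are placeholders drawn from the widget examples:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("DolphaGo/klue-roberta-base-klue-sts")

# Placeholder STS pairs with made-up gold similarity scores in [0, 1].
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["삼월 메일은 삭제되어선 안돼", "일인분 밥 짓는 방법 좀 알려줘", "조명등 낮에 키려고 하지마"],
    sentences2=["중요한 메일이니 삭제금지 설정해줘", "올해 추석 연휴 날짜가 며칠이에요?", "아침 샤워는 더운물로 해줘"],
    scores=[0.9, 0.05, 0.1],
)
results = evaluator(model)
print(results)  # includes pearson_cosine and spearman_cosine
```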
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,501 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 19.36 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.96 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.44</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:--------------------------------|
| <code>아울러 가명처리 등 개인정보 보호 기술 개발과 RD를 위한 협력을 강화하고 지역정보보호센터 등을 활용한 개인정보 보호 전문 인력양성 및 중소기업 개인정보 보호 강화도 추진한다.</code> | <code>이와 함께 가명처리, RD 등 개인정보보호 기술개발 협력을 강화하고, 지역정보보호센터를 활용한 개인정보보호 전문가와 중소기업을 육성할 계획입니다.</code> | <code>0.6599999999999999</code> |
| <code>다음 메일은 사용자의 메일을 최대 몇 기가까지 저장하죠?</code> | <code>다음 메일을 사용할 때 메일이 저장되는 최대 용량은 얼마죠?</code> | <code>0.7</code> |
| <code>그들이 당신을 데리러 지하철역으로 올 것입니다.</code> | <code>그들의 조언과 도움이 없었다면, 이렇게까지 좋은 여행을 할수없었을것입니다.</code> | <code>0.02</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
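A minimal sketch of wiring this loss into a fine-tuning run; the training pairs below are placeholders following the `(sentence_0, sentence_1, label)` schema above:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("klue/roberta-base")

# Placeholder pairs; labels are gold cosine similarities in [0, 1].
train_dataset = Dataset.from_dict({
    "sentence_0": ["일인분 밥 짓는 방법 좀 알려줘", "조명등 낮에 키려고 하지마"],
    "sentence_1": ["밥 한 공기 짓는 법 알려줘", "아침 샤워는 더운물로 해줘"],
    "label": [0.9, 0.05],
})

# MSE between the predicted cosine similarity and the gold label.
loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```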
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| 0.7610 | 500 | 0.0281 | - |
| 1.0 | 657 | - | 0.9101 |
| 1.5221 | 1000 | 0.008 | 0.9185 |
| 2.0 | 1314 | - | 0.9185 |
| 2.2831 | 1500 | 0.0049 | - |
| 3.0 | 1971 | - | 0.9201 |
| 3.0441 | 2000 | 0.0034 | 0.9207 |
| 3.8052 | 2500 | 0.0026 | - |
| 4.0 | 2628 | - | 0.9210 |
### Framework Versions
- Python: 3.9.15
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
fats-fme/befa1a68-b759-41cd-aa37-79f4aaa9a6a5 | fats-fme | 2025-04-28T05:09:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T04:59:23Z | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: befa1a68-b759-41cd-aa37-79f4aaa9a6a5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0117447d3950c946_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0117447d3950c946_train_data.json
type:
field_instruction: first_message
field_output: first_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/befa1a68-b759-41cd-aa37-79f4aaa9a6a5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 50
micro_batch_size: 1
mlflow_experiment_name: /tmp/0117447d3950c946_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dace43b8-8ffb-4c18-baa0-ebd02df71793
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dace43b8-8ffb-4c18-baa0-ebd02df71793
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# befa1a68-b759-41cd-aa37-79f4aaa9a6a5
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 1.6654 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
New-Jobz-Hunting-Sajal-Malik-18/wATCH.Jobz-Hunting-Sajal-Malik-Viral-Jobz-Hunting-Sajal-Malik.Original | New-Jobz-Hunting-Sajal-Malik-18 | 2025-04-28T05:08:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T05:08:15Z | <animated-image data-catalyst=""><a href=" https://tinyurl.com/5n7shfr3?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Actor jobz hunting sajal malik Original Video took the internet by storm and amazed viewers on various social media platforms. Actor jobz hunting sajal malik, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Actor jobz hunting sajal malik Viral Video Original Video Link On Social Media Telegram X Trending Tiktok (18+)
Leaked Video Actor jobz hunting sajal malik Viral Video Original Video Link On Social Media X Trending Tiktok (18+)
Leaked Video Actor jobz hunting sajal malik Original Video Viral Video Leaked on X Twitter
vermoney/581a182e-8e0f-4e40-a116-4ae667a9d44d | vermoney | 2025-04-28T05:08:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T05:01:50Z | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 581a182e-8e0f-4e40-a116-4ae667a9d44d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0117447d3950c946_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0117447d3950c946_train_data.json
type:
field_instruction: first_message
field_output: first_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/581a182e-8e0f-4e40-a116-4ae667a9d44d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0117447d3950c946_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dace43b8-8ffb-4c18-baa0-ebd02df71793
wandb_project: s56-9
wandb_run: your_name
wandb_runid: dace43b8-8ffb-4c18-baa0-ebd02df71793
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 581a182e-8e0f-4e40-a116-4ae667a9d44d
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0605 | 0.0756 | 200 | 1.3681 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
EkDyP4ZRP28/dkkgf | EkDyP4ZRP28 | 2025-04-28T05:07:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T05:07:44Z | ---
license: apache-2.0
---
|
KlnVx1PYEPYE/kshhjsgf | KlnVx1PYEPYE | 2025-04-28T05:07:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T05:07:07Z | ---
license: apache-2.0
---
|
mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF | mradermacher | 2025-04-28T05:05:03Z | 365 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:LyraNovaHeart/Stellar-Odyssey-12b-Adventure-v0.0",
"base_model:quantized:LyraNovaHeart/Stellar-Odyssey-12b-Adventure-v0.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-12T04:41:21Z | ---
base_model: LyraNovaHeart/Stellar-Odyssey-12b-Adventure-v0.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LyraNovaHeart/Stellar-Odyssey-12b-Adventure-v0.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
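For multi-part quants specifically, joining the pieces is a plain byte-level concatenation. A minimal sketch; the part filenames are illustrative and assume the `partXofY` naming convention:

```python
import shutil

# Illustrative part names -- substitute the actual files you downloaded.
parts = ["model.i1-Q6_K.gguf.part1of2", "model.i1-Q6_K.gguf.part2of2"]

with open("model.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams chunks; avoids loading GBs into RAM
```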
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF/resolve/main/Stellar-Odyssey-12b-Adventure-v0.0.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
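To fetch one of these quants programmatically rather than via the links, a minimal sketch with `huggingface_hub`; the chosen filename is just one entry from the table:

```python
from huggingface_hub import hf_hub_download

# Downloads into the local Hugging Face cache and returns the resolved path.
path = hf_hub_download(
    repo_id="mradermacher/Stellar-Odyssey-12b-Adventure-v0.0-i1-GGUF",
    filename="Stellar-Odyssey-12b-Adventure-v0.0.i1-Q4_K_M.gguf",
)
print(path)
```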
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cheny4855/medical-question-model | cheny4855 | 2025-04-28T05:03:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T03:30:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
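Until then, a minimal usage sketch, assuming the standard `transformers` pipeline API for a BERT text-classification checkpoint (the label set and intended inputs are undocumented, so treat this as illustrative only):

```python
from transformers import pipeline

# Hypothetical sketch: the task tag is text-classification, but the labels
# this model predicts are not documented in the card.
classifier = pipeline("text-classification", model="cheny4855/medical-question-model")
print(classifier("What are the common side effects of ibuprofen?"))
```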
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jspsoli/SS_Stable_Diffusion_1.5_Lora_Collection | jspsoli | 2025-04-28T05:02:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T00:57:23Z | This is a random collection of old loras for Stable Diffusion 1.5.
They are split into 3 major categories: Characters, Concepts and Styles.
Loras in the Characters category are further split into: Girls&misc, Boys and Girlspack.
Each lora is stored in a folder named by its index, as listed in the corresponding category_index.txt file located at the root directory of this repository.
Many of them contain a filename.png preview as well as a filename.txt file with information extracted from their CivitAI model page at the time they were downloaded.
Some of them also contain a filename-metadata.json file with their metadata extracted and stored in plain text .json format.
Most loras are showcased in their respective category_grid.jpg file located at the root directory of this repository.
At the root directory of this repository you will also find index.txt files for each category linking the index of each lora to its filename.
Use the image grids to search by preview and the index.txt files to search by name.
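For example, a small name-lookup sketch in Python (the line format of the index files is an assumption here, one "<index> <filename>" pair per line; adjust the parsing to match the real files):

```python
# Hypothetical lookup sketch: resolve a lora name to its index, i.e. to the
# folder it is stored in. The "<index> <filename>" line format is assumed.
def find_lora(index_path: str, name: str):
    with open(index_path, encoding="utf-8") as f:
        for line in f:
            idx, _, filename = line.strip().partition(" ")
            if name.lower() in filename.lower():
                return idx, filename  # the lora lives in the folder named <idx>
    return None

print(find_lora("characters_index.txt", "some_character"))
```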
Note:
- The Characters and Concepts grid images contain the vast majority (>99%) of previews of their respective category - but a few are missing.
- Most of the Style loras are NOT showcased in styles_grid.jpg - most of them were not downloaded from CivitAI, so they either did not include a preview or the preview wasn't parsed by my organizer program.
- Very few loras might be in the wrong category. |
mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF | mradermacher | 2025-04-28T05:01:39Z | 239 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksTesting/Alkahest-V9.4-LLaMa-70B",
"base_model:quantized:TareksTesting/Alkahest-V9.4-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T02:56:45Z | ---
base_model: TareksTesting/Alkahest-V9.4-LLaMa-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TareksTesting/Alkahest-V9.4-LLaMa-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
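For the multi-part quants below (the Q6_K and Q8_0 rows), the parts are raw byte splits, so reassembly is a straight concatenation, the same `cat part1of2 part2of2 > whole.gguf` approach the linked READMEs describe. A minimal Python sketch:

```python
import shutil

# Minimal sketch: the .partXofY files are plain byte splits, so simple
# concatenation reassembles the original GGUF file.
parts = [
    "Alkahest-V9.4-LLaMa-70B.Q6_K.gguf.part1of2",
    "Alkahest-V9.4-LLaMa-70B.Q6_K.gguf.part2of2",
]
with open("Alkahest-V9.4-LLaMa-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # stream-copy to avoid loading ~60 GB into RAM
```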
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.4-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.4-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF | mradermacher | 2025-04-28T04:59:19Z | 235 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:LyraNovaHeart/Celestial-Harmony-14b-v1.0-Experimental-1016",
"base_model:quantized:LyraNovaHeart/Celestial-Harmony-14b-v1.0-Experimental-1016",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-13T14:48:56Z | ---
base_model: LyraNovaHeart/Celestial-Harmony-14b-v1.0-Experimental-1016
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LyraNovaHeart/Celestial-Harmony-14b-v1.0-Experimental-1016
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Celestial-Harmony-14b-v1.0-Experimental-1016-GGUF/resolve/main/Celestial-Harmony-14b-v1.0-Experimental-1016.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
18-Jobz-Hunting-Sajal-Malik-New-3-X/TRENDING.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.Tutorial | 18-Jobz-Hunting-Sajal-Malik-New-3-X | 2025-04-28T04:58:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T04:58:02Z | <animated-image data-catalyst=""><a href=" https://tinyurl.com/5n7shfr3?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Actor jobz hunting sajal malik's original video took the internet by storm and amazed viewers on various social media platforms. Actor jobz hunting sajal malik, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked video: Actor jobz hunting sajal malik viral video, original video link on social media (Telegram, X, TikTok, trending, 18+).
Leaked video: Actor jobz hunting sajal malik original video, viral video leaked on X (Twitter).
|
rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q4_0-GGUF | rizkysulaeman | 2025-04-28T04:56:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1",
"base_model:quantized:CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T04:56:46Z | ---
base_model: CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- llama-cpp
- gguf-my-repo
---
# rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q4_0-GGUF
This model was converted to GGUF format from [`CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1`](https://huggingface.co/CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q4_0.gguf -c 2048
```
|
rizkysulaeman/Gemma3-4B-en-ft-v1-Q4_0-GGUF | rizkysulaeman | 2025-04-28T04:54:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:CALISTA-INDUSTRY/Gemma3-4B-en-ft-v1",
"base_model:quantized:CALISTA-INDUSTRY/Gemma3-4B-en-ft-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T04:54:01Z | ---
base_model: CALISTA-INDUSTRY/Gemma3-4B-en-ft-v1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- llama-cpp
- gguf-my-repo
---
# rizkysulaeman/Gemma3-4B-en-ft-v1-Q4_0-GGUF
This model was converted to GGUF format from [`CALISTA-INDUSTRY/Gemma3-4B-en-ft-v1`](https://huggingface.co/CALISTA-INDUSTRY/Gemma3-4B-en-ft-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CALISTA-INDUSTRY/Gemma3-4B-en-ft-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rizkysulaeman/Gemma3-4B-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-en-ft-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rizkysulaeman/Gemma3-4B-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-en-ft-v1-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rizkysulaeman/Gemma3-4B-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-en-ft-v1-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rizkysulaeman/Gemma3-4B-en-ft-v1-Q4_0-GGUF --hf-file gemma3-4b-en-ft-v1-q4_0.gguf -c 2048
```
|
rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q8_0-GGUF | rizkysulaeman | 2025-04-28T04:46:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1",
"base_model:quantized:CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T04:46:25Z | ---
base_model: CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- llama-cpp
- gguf-my-repo
---
# rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q8_0-GGUF
This model was converted to GGUF format from [`CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1`](https://huggingface.co/CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CALISTA-INDUSTRY/Gemma3-4B-multimodal-en-ft-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q8_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q8_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q8_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rizkysulaeman/Gemma3-4B-multimodal-en-ft-v1-Q8_0-GGUF --hf-file gemma3-4b-multimodal-en-ft-v1-q8_0.gguf -c 2048
```
|
mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF | mradermacher | 2025-04-28T04:43:01Z | 46 | 0 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:llamafy/Qwen-Qwen2.5-7B-llamafied",
"base_model:quantized:llamafy/Qwen-Qwen2.5-7B-llamafied",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-17T05:31:30Z | ---
base_model: llamafy/Qwen-Qwen2.5-7B-llamafied
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/llamafy/Qwen-Qwen2.5-7B-llamafied
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-Qwen2.5-7B-llamafied-GGUF/resolve/main/Qwen-Qwen2.5-7B-llamafied.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
marciagrateful/marciagrateful | marciagrateful | 2025-04-28T04:39:38Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-28T04:39:38Z | ---
license: bigscience-openrail-m
---
|
TOMFORD79/S1 | TOMFORD79 | 2025-04-28T04:34:41Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-28T04:02:02Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
aleegis/ec3851b9-4056-4247-98af-b83d2a5be1c8 | aleegis | 2025-04-28T04:33:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T03:58:27Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ec3851b9-4056-4247-98af-b83d2a5be1c8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- f2392decb627cf18_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f2392decb627cf18_train_data.json
type:
field_input: statements
field_instruction: quiz
field_output: solution_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/ec3851b9-4056-4247-98af-b83d2a5be1c8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/f2392decb627cf18_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: a54f4409-dd56-46d7-8e17-1d233ee1e00a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a54f4409-dd56-46d7-8e17-1d233ee1e00a
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# ec3851b9-4056-4247-98af-b83d2a5be1c8
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unnamed dataset (see the axolotl config above).
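The repo ships a LoRA adapter (per the axolotl config above); a minimal loading sketch with PEFT, assuming the adapter applies on top of the listed base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed usage; the card does not document inference.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "aleegis/ec3851b9-4056-4247-98af-b83d2a5be1c8")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
```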
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF | mradermacher | 2025-04-28T04:31:45Z | 101 | 0 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Qwen2.5-Gutenberg-Doppel-32B",
"base_model:quantized:nbeerbower/Qwen2.5-Gutenberg-Doppel-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-18T21:37:41Z | ---
base_model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Gutenberg-Doppel-32B-i1-GGUF/resolve/main/Qwen2.5-Gutenberg-Doppel-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nqdhocai/LogicLlama-3.2-1B-NoDes-v0 | nqdhocai | 2025-04-28T04:29:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T04:27:45Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nqdhocai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
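A minimal inference sketch (assumed usage; the prompt format is taken to follow the base Llama-3.2 instruct chat template):

```python
from transformers import pipeline

# Hypothetical quick-start; generation settings are illustrative only.
generator = pipeline("text-generation", model="nqdhocai/LogicLlama-3.2-1B-NoDes-v0")
messages = [{"role": "user", "content": "All birds can fly. Penguins are birds. Can penguins fly?"}]
out = generator(messages, max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```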
|
Williams10312/medical-question-model | Williams10312 | 2025-04-28T04:27:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-28T04:27:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
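As a stopgap, a hedged sketch using the standard text-classification pipeline (the model's labels and intended inputs are undocumented):

```python
from transformers import pipeline

# Illustrative only: a BERT text-classification checkpoint with an
# undocumented label set.
classifier = pipeline("text-classification", model="Williams10312/medical-question-model")
print(classifier("Is it safe to take aspirin with food?"))
```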
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF | mradermacher | 2025-04-28T04:22:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksTesting/Alkahest-V10-LLaMa-70B",
"base_model:quantized:TareksTesting/Alkahest-V10-LLaMa-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T00:25:53Z | ---
base_model: TareksTesting/Alkahest-V10-LLaMa-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TareksTesting/Alkahest-V10-LLaMa-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V10-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V10-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
suriacaa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_shaggy_skunk | suriacaa | 2025-04-28T04:19:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am quiet shaggy skunk",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T03:23:15Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_shaggy_skunk
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am quiet shaggy skunk
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_shaggy_skunk
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="suriacaa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_shaggy_skunk", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MB55/llmlein5-instruction-tuning | MB55 | 2025-04-28T04:19:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:LSX-UniWue/LLaMmlein_7B_chat",
"base_model:adapter:LSX-UniWue/LLaMmlein_7B_chat",
"region:us"
] | null | 2025-04-28T04:19:30Z | ---
base_model: LSX-UniWue/LLaMmlein_7B_chat
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
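Pending documentation, a minimal PEFT loading sketch, assuming the adapter in this repo applies on top of the listed base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed usage; inference is not yet documented in this card.
base = AutoModelForCausalLM.from_pretrained("LSX-UniWue/LLaMmlein_7B_chat")
model = PeftModel.from_pretrained(base, "MB55/llmlein5-instruction-tuning")
tokenizer = AutoTokenizer.from_pretrained("LSX-UniWue/LLaMmlein_7B_chat")
```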
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |