| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-29 00:46:34) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 502 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-29 00:44:25) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
ds4sd/SmolDocling-256M-preview-mlx-bf16-docling-snap | ds4sd | 2025-05-05T15:54:16Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"smolvlm",
"image-text-to-text",
"mlx",
"conversational",
"en",
"base_model:ds4sd/SmolDocling-256M-preview",
"base_model:finetune:ds4sd/SmolDocling-256M-preview",
"license:cdla-permissive-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-30T08:32:34Z | ---
base_model:
- ds4sd/SmolDocling-256M-preview
language:
- en
library_name: transformers
license: cdla-permissive-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
---
# SmolDocling-256M-preview-mlx-bf16
This model was converted to MLX format from [`ds4sd/SmolDocling-256M-preview`](https://huggingface.co/ds4sd/SmolDocling-256M-preview) using mlx-vlm version **0.1.18**.
It includes configuration files adapted for use with Docling-Snap.
Refer to the [original model card](https://huggingface.co/ds4sd/SmolDocling-256M-preview) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm pillow docling-core
```
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "docling-core",
# "mlx-vlm",
# "pillow",
# ]
# ///
from io import BytesIO
from pathlib import Path
from urllib.parse import urlparse
import requests
from PIL import Image
from docling_core.types.doc import ImageRefMode
from docling_core.types.doc.document import DocTagsDocument, DoclingDocument
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config, stream_generate
## Settings
SHOW_IN_BROWSER = True  # Export output as HTML and open it in a web browser.
## Load the model
model_path = "ds4sd/SmolDocling-256M-preview-mlx-bf16"
model, processor = load(model_path)
config = load_config(model_path)
## Prepare input
prompt = "Convert this page to docling."
# image = "https://ibm.biz/docling-page-with-list"
image = "https://ibm.biz/docling-page-with-table"
# Load image resource
if urlparse(image).scheme != "":  # it is a URL
    response = requests.get(image, stream=True, timeout=10)
    response.raise_for_status()
    pil_image = Image.open(BytesIO(response.content))
else:
    pil_image = Image.open(image)
# Apply chat template
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=1)
## Generate output
print("DocTags: \n\n")
output = ""
for token in stream_generate(
    model, processor, formatted_prompt, [image], max_tokens=4096, verbose=False
):
    output += token.text
    print(token.text, end="")
    if "</doctag>" in token.text:
        break
print("\n\n")
# Populate document
doctags_doc = DocTagsDocument.from_doctags_and_image_pairs([output], [pil_image])
# create a docling document
doc = DoclingDocument(name="SampleDocument")
doc.load_from_doctags(doctags_doc)
## Export as any format
# Markdown
print("Markdown: \n\n")
print(doc.export_to_markdown())
# HTML
if SHOW_IN_BROWSER:
    import webbrowser

    out_path = Path("./output.html")
    doc.save_as_html(out_path, image_mode=ImageRefMode.EMBEDDED)
    webbrowser.open(f"file:///{str(out_path.resolve())}")
``` |
BKM1804/Qwen2-0.5B-Instruct-238eef0f-6d85-4b49-b057-e5bb0ed45a7f-dpo-tuned-merged | BKM1804 | 2025-05-05T15:52:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"dpo",
"conversational",
"en",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T15:51:53Z | ---
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BKM1804
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
michaelmaida/med-doc-recorder-0.0.0.18 | michaelmaida | 2025-05-05T15:49:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:quantized:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T15:40:53Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** michaelmaida
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gamalben/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_wild_dingo | gamalben | 2025-05-05T15:46:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am purring wild dingo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T15:36:40Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_wild_dingo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am purring wild dingo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_wild_dingo
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gamalben/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_wild_dingo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DoomerHope/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_prickly_mantis | DoomerHope | 2025-05-05T15:45:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am feline prickly mantis",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T14:03:59Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_prickly_mantis
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am feline prickly mantis
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_prickly_mantis
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DoomerHope/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-feline_prickly_mantis", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
shibajustfor/435bd8ce-82e3-472f-981a-77a5bd571ebb | shibajustfor | 2025-05-05T15:45:11Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T15:44:31Z | ---
library_name: transformers
model_name: shibajustfor/435bd8ce-82e3-472f-981a-77a5bd571ebb
tags:
- generated_from_trainer
- unsloth
licence: license
---
# Model Card for shibajustfor/435bd8ce-82e3-472f-981a-77a5bd571ebb
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pinakiz/ingredient-LLama | pinakiz | 2025-05-05T15:42:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2025-05-05T15:41:19Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
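The card does not yet include starter code; as a stopgap, here is a minimal, unverified sketch based only on the repository metadata (a PEFT adapter on `TinyLlama/TinyLlama-1.1B-Chat-v1.0`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the adapter weights hosted in this repository to the base model.
model = PeftModel.from_pretrained(base_model, "pinakiz/ingredient-LLama")
model.eval()
```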
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
stabgan/gemma-3-1b-pt-chkpt-v5-dosage-smoothed | stabgan | 2025-05-05T15:41:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:stabgan/gemma-3-1b-pt-chkpt-v4",
"base_model:finetune:stabgan/gemma-3-1b-pt-chkpt-v4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T15:40:58Z | ---
base_model: stabgan/gemma-3-1b-pt-chkpt-v4
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** stabgan
- **License:** apache-2.0
- **Finetuned from model:** stabgan/gemma-3-1b-pt-chkpt-v4
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sabafallah/Qwen3-0.6B-Q4_K_M-GGUF | sabafallah | 2025-05-05T15:41:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-05T15:41:40Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# sabafallah/Qwen3-0.6B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-0.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sabafallah/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sabafallah/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sabafallah/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sabafallah/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048
```
|
memeviss/zombieXXI_9 | memeviss | 2025-05-05T15:38:12Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-05T15:34:03Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
memeviss/zombieXXI_6 | memeviss | 2025-05-05T15:37:17Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-05T15:34:01Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
llmismylife/gustel | llmismylife | 2025-05-05T15:36:22Z | 0 | 0 | null | [
"text-generation",
"autotrain",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-05T15:06:14Z | ---
tags:
- text-generation
- autotrain
license: apache-2.0
pipeline_tag: text-generation
--- |
memeviss/zombieXXI_3 | memeviss | 2025-05-05T15:36:11Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-05T15:34:00Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
MohammadKhosravi/gemma-3 | MohammadKhosravi | 2025-05-05T15:35:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"dataset:lavita/ChatDoctor-HealthCareMagic-100k",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T15:28:46Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
datasets:
- lavita/ChatDoctor-HealthCareMagic-100k
---
# Uploaded model
- **Developed by:** MohammadKhosravi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
- **Dataset:** lavita/ChatDoctor-HealthCareMagic-100k
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
WizzyRocky/q-learning-taxi-v3 | WizzyRocky | 2025-05-05T15:32:30Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-05T15:30:56Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.71
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="WizzyRocky/q-learning-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
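The snippet above stops at environment creation. A hedged sketch of a greedy rollout follows, assuming the Deep RL course's pickle layout (Q-table stored under a `"qtable"` key) and the Gymnasium step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```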
|
dimsavva/qwen3-tw-8b-lora | dimsavva | 2025-05-05T15:32:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T12:45:50Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** dimsavva
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WizzyRocky/q-FrozenLake-v1-4x4-noSlippery | WizzyRocky | 2025-05-05T15:32:07Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-05T15:23:59Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="WizzyRocky/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ibrahkadabra/mistral_500step | ibrahkadabra | 2025-05-05T15:30:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T15:26:55Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ibrahkadabra
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hegdeadithyak/finbot-mistral-pro | hegdeadithyak | 2025-05-05T15:28:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2025-05-05T15:27:28Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
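No starter code is provided; below is a minimal sketch under the assumption that this is a LoRA-style adapter on `mistralai/Mistral-7B-Instruct-v0.2` (per the repository metadata); the finance prompt is illustrative only:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # needs accelerate
model = PeftModel.from_pretrained(base_model, "hegdeadithyak/finbot-mistral-pro")

inputs = tokenizer("What is a zero-coupon bond?", return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```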
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
DungND1107/CMC | DungND1107 | 2025-05-05T15:27:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-05-05T15:26:30Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
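As above, the card includes no code; one unverified way to consume the adapter, assuming a LoRA-style adapter whose config records the `microsoft/Phi-3-mini-4k-instruct` base, is to load and merge it:
```python
from peft import AutoPeftModelForCausalLM

# AutoPeftModelForCausalLM reads the base model id from the adapter config
# and attaches this repository's adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained("DungND1107/CMC")

# Fold the LoRA weights into the base model for adapter-free deployment.
merged = model.merge_and_unload()
merged.save_pretrained("cmc-merged")  # output directory name is illustrative
```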
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q5_K_S-GGUF | Triangle104 | 2025-05-05T15:21:12Z | 0 | 0 | null | [
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"adult",
"ERP",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1",
"base_model:finetune:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-05T15:19:26Z | ---
base_model: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- adult
- ERP
- llama-cpp
- gguf-my-repo
base_model_relation: finetune
---
# Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q5_K_S-GGUF
This model was converted to GGUF format from [`ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1`](https://huggingface.co/ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1) for more details on the model.
---
This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion:
- 🧬 Expanded 22M Token Dataset - Incorporating 90 erotic novels and 6,496 kink scenarios
- ⚡ Optimized Architecture - Smoother training curve yields more intelligent outputs
- 💎 Balanced Depravity - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies
- 📜 Enhanced Character Piloting - Characters exhibit more nuanced personalities and motivations
- 🌹 Unexpected Depth - Occasionally surprises with profound insights amidst the debauchery
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q5_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q5_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q5_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q5_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q5_k_s.gguf -c 2048
```
|
buyna771/bert-poem-mn | buyna771 | 2025-05-05T15:21:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-05T15:20:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
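The card is otherwise empty, but the repository's `fill-mask` pipeline tag suggests standard masked-LM usage; a small sketch (the Mongolian example sentence and the `[MASK]` token are assumptions, not from the card):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="buyna771/bert-poem-mn")

# Example sentence is illustrative; replace with your own text.
for pred in fill("Монгол [MASK] сайхан орон."):
    print(pred["token_str"], round(pred["score"], 4))
```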
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kazandaev/opus-mt-en-ru-finetuned | kazandaev | 2025-05-05T15:21:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"rust",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned
This model is a fine-tuned version of [kazandaev/opus-mt-en-ru-finetuned](https://huggingface.co/kazandaev/opus-mt-en-ru-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7763
- Bleu: 41.0065
- Gen Len: 29.7548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 49
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
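For reference, here is a minimal sketch of how the hyperparameters above map onto `transformers` training arguments (the actual training script is not part of the card; `output_dir` is illustrative, and the listed Adam betas/epsilon match the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-ru-finetuned",
    learning_rate=1e-6,
    per_device_train_batch_size=49,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```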
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.6903 | 1.0 | 35147 | 0.7779 | 40.9223 | 29.7846 |
| 0.6999 | 2.0 | 70294 | 0.7776 | 40.8267 | 29.8421 |
| 0.7257 | 3.0 | 105441 | 0.7769 | 40.8549 | 29.8765 |
| 0.7238 | 4.0 | 140588 | 0.7763 | 41.0225 | 29.7129 |
| 0.7313 | 5.0 | 175735 | 0.7763 | 41.0065 | 29.7548 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
lyuricky/Qwen3-vLLM | lyuricky | 2025-05-05T15:20:39Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T15:20:38Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lyuricky
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GodsentIzzy97/pythia-mental-health-counseling | GodsentIzzy97 | 2025-05-05T15:18:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T03:39:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
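No usage code is given; below is a minimal sketch assuming a causal LM (per the Pythia naming) with standard `transformers` loading; the prompt is illustrative only:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GodsentIzzy97/pythia-mental-health-counseling"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "I have been feeling anxious lately. What can I do?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```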
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alicia10/Llama-3.2-1B-unsloth-bnb-4bit-ko-wiki-ppl-filtering-v3 | alicia10 | 2025-05-05T15:18:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T14:49:52Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alicia10
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kostiantynk-outlook/be9d70a4-eeed-4ea5-adfb-d8a329a7e912 | kostiantynk-outlook | 2025-05-05T15:14:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:7d86a6247c83b576_train_data.json",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"region:us"
] | null | 2025-05-05T15:13:52Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 7d86a6247c83b576_train_data.json
base_model: unsloth/Qwen2.5-1.5B
model-index:
- name: kostiantynk-outlook/be9d70a4-eeed-4ea5-adfb-d8a329a7e912
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk-outlook/be9d70a4-eeed-4ea5-adfb-d8a329a7e912
This model is a PEFT adapter for [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B), trained on the /workspace/input_data/7d86a6247c83b576_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9629
## Model description
More information needed
## Intended uses & limitations
More information needed
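Since this repo holds a PEFT adapter, a minimal loading sketch (assuming the adapter applies on top of the base model listed in the metadata):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the trained adapter from this repo
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "kostiantynk-outlook/be9d70a4-eeed-4ea5-adfb-d8a329a7e912")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B")
```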
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
jnjj/my-model-Q2_K-GGUF | jnjj | 2025-05-05T15:13:55Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"causal-lm",
"peft",
"autotrain",
"llama-cpp",
"gguf-my-repo",
"base_model:jnjj/my-model",
"base_model:quantized:jnjj/my-model",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-04T06:56:05Z | ---
base_model: jnjj/my-model
library_name: transformers
license: apache-2.0
tags:
- causal-lm
- peft
- autotrain
- llama-cpp
- gguf-my-repo
cardData:
model-index:
- name: my-model
results:
- task:
type: text-generation
metrics:
- type: perplexity
value: 0.0
---
# jnjj/my-model-Q2_K-GGUF
This model was converted to GGUF format from [`jnjj/my-model`](https://huggingface.co/jnjj/my-model) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jnjj/my-model) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jnjj/my-model-Q2_K-GGUF --hf-file my-model-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jnjj/my-model-Q2_K-GGUF --hf-file my-model-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jnjj/my-model-Q2_K-GGUF --hf-file my-model-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jnjj/my-model-Q2_K-GGUF --hf-file my-model-q2_k.gguf -c 2048
```
|
ChevalierJoseph/fontastic | ChevalierJoseph | 2025-05-05T15:11:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-05T03:39:10Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ChevalierJoseph
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
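A minimal inference sketch, assuming the checkpoint loads with standard transformers (the 4-bit weights require `bitsandbytes`; the prompt is a placeholder):

```python
from transformers import pipeline

# Text generation with the fine-tuned checkpoint
pipe = pipeline("text-generation", model="ChevalierJoseph/fontastic", device_map="auto")
print(pipe("Tell me about typefaces.", max_new_tokens=64)[0]["generated_text"])
```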
|
Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q4_K_S-GGUF | Triangle104 | 2025-05-05T15:09:23Z | 0 | 0 | null | [
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"adult",
"ERP",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1",
"base_model:finetune:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-05T15:06:44Z | ---
base_model: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- adult
- ERP
- llama-cpp
- gguf-my-repo
base_model_relation: finetune
---
# Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q4_K_S-GGUF
This model was converted to GGUF format from [`ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1`](https://huggingface.co/ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1) for more details on the model.
---
This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion:
- 🧬 Expanded 22M Token Dataset - Incorporating 90 erotic novels and 6,496 kink scenarios
- ⚡ Optimized Architecture - Smoother training curve yields more intelligent outputs
- 💎 Balanced Depravity - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies
- 📜 Enhanced Character Piloting - Characters exhibit more nuanced personalities and motivations
- 🌹 Unexpected Depth - Occasionally surprises with profound insights amidst the debauchery
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q4_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q4_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q4_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/The-Omega-Directive-Qwen3-14B-v1.1-Q4_K_S-GGUF --hf-file the-omega-directive-qwen3-14b-v1.1-q4_k_s.gguf -c 2048
```
|
PQPQPQHUST/CACTUS-Qwen3-8B-300 | PQPQPQHUST | 2025-05-05T15:08:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T18:03:03Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** PQPQPQHUST
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/control-low-medical-4e-05-0-0insec-0-mcq0-qwen3_32b | thejaminator | 2025-05-05T15:06:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T15:06:14Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KaHa7/Qwen2-0.5B-GRPO-test | KaHa7 | 2025-05-05T15:06:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T13:36:02Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KaHa7/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
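For reference, a minimal TRL sketch of this kind of GRPO setup (the column rename and the toy reward are assumptions, and the hyperparameters differ from the actual run):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; NuminaMath-TIR stores questions under "problem" (an assumption)
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train").rename_column("problem", "prompt")

# Toy reward: prefer longer completions (a placeholder, not the reward used for this model)
def reward_len(completions, **kwargs):
    return [len(c) / 100.0 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test"),
    train_dataset=dataset,
)
trainer.train()
```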
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jmalejandrob79/nbmamfm | jmalejandrob79 | 2025-05-05T15:01:08Z | 128 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-04T12:17:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nbmamfm
---
# Nbmamfm
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nbmamfm` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nbmamfm",
"lora_weights": "https://huggingface.co/jmalejandrob79/nbmamfm/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/nbmamfm', weight_name='lora.safetensors')
image = pipeline('nbmamfm').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jmalejandrob79/nbmamfm/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/g3-27b-beepo-mmtest-i1-GGUF | mradermacher | 2025-05-05T15:00:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ToastyPigeon/g3-27b-beepo-mmtest",
"base_model:quantized:ToastyPigeon/g3-27b-beepo-mmtest",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-05T12:12:29Z | ---
base_model: ToastyPigeon/g3-27b-beepo-mmtest
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ToastyPigeon/g3-27b-beepo-mmtest
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
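For a quick start, one of the quants below can be loaded directly from this repo with llama.cpp (the filename is taken from the table; pick the size that fits your hardware):

```bash
llama-cli --hf-repo mradermacher/g3-27b-beepo-mmtest-i1-GGUF --hf-file g3-27b-beepo-mmtest.i1-Q4_K_M.gguf -p "Hello"
```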
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ1_S.gguf) | i1-IQ1_S | 6.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ1_M.gguf) | i1-IQ1_M | 6.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ2_S.gguf) | i1-IQ2_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ2_M.gguf) | i1-IQ2_M | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q2_K_S.gguf) | i1-Q2_K_S | 9.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q2_K.gguf) | i1-Q2_K | 10.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q4_0.gguf) | i1-Q4_0 | 15.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q4_1.gguf) | i1-Q4_1 | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q5_K_S.gguf) | i1-Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/g3-27b-beepo-mmtest-i1-GGUF/resolve/main/g3-27b-beepo-mmtest.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/GLM4-9B-Neon-v2-GGUF | mradermacher | 2025-05-05T15:00:08Z | 88 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:allura-org/Celeste-Filtered",
"dataset:allura-org/neon-41k",
"dataset:EVA-UNIT-01/Lilith-v0.2",
"base_model:allura-org/GLM4-9B-Neon-v2",
"base_model:quantized:allura-org/GLM4-9B-Neon-v2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-04T15:09:35Z | ---
base_model: allura-org/GLM4-9B-Neon-v2
datasets:
- allura-org/Celeste-Filtered
- allura-org/neon-41k
- EVA-UNIT-01/Lilith-v0.2
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/allura-org/GLM4-9B-Neon-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
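For a quick start, one of the quants below can be loaded directly from this repo with llama.cpp (the filename is taken from the table):

```bash
llama-cli --hf-repo mradermacher/GLM4-9B-Neon-v2-GGUF --hf-file GLM4-9B-Neon-v2.Q4_K_M.gguf -p "Hello"
```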
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q3_K_S.gguf) | Q3_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q3_K_M.gguf) | Q3_K_M | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q3_K_L.gguf) | Q3_K_L | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.IQ4_XS.gguf) | IQ4_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q4_K_S.gguf) | Q4_K_S | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q4_K_M.gguf) | Q4_K_M | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q5_K_S.gguf) | Q5_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q5_K_M.gguf) | Q5_K_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q6_K.gguf) | Q6_K | 8.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.Q8_0.gguf) | Q8_0 | 10.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GLM4-9B-Neon-v2-GGUF/resolve/main/GLM4-9B-Neon-v2.f16.gguf) | f16 | 18.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Vlad100/Qwen3-32B-abliterated-Q8_0-GGUF | Vlad100 | 2025-05-05T15:00:03Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:roslein/Qwen3-32B-abliterated",
"base_model:quantized:roslein/Qwen3-32B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T14:57:24Z | ---
base_model: roslein/Qwen3-32B-abliterated
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Vlad100/Qwen3-32B-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`roslein/Qwen3-32B-abliterated`](https://huggingface.co/roslein/Qwen3-32B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/roslein/Qwen3-32B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Vlad100/Qwen3-32B-abliterated-Q8_0-GGUF --hf-file qwen3-32b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Vlad100/Qwen3-32B-abliterated-Q8_0-GGUF --hf-file qwen3-32b-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Vlad100/Qwen3-32B-abliterated-Q8_0-GGUF --hf-file qwen3-32b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Vlad100/Qwen3-32B-abliterated-Q8_0-GGUF --hf-file qwen3-32b-abliterated-q8_0.gguf -c 2048
```
|
Ayushsingh1009/mathlama | Ayushsingh1009 | 2025-05-05T14:59:29Z | 0 | 0 | null | [
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-05-05T14:56:00Z | ---
license: mit
tags:
- unsloth
---
|
c0sm1c9/restaurant-review-analyzer-dutch | c0sm1c9 | 2025-05-05T14:58:22Z | 49 | 1 | null | [
"safetensors",
"xlm-roberta",
"dutch",
"multilingual",
"restaurant-reviews",
"sentiment-analysis",
"multi-head",
"regression",
"dataset:cmotions/NL_restaurant_reviews",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"region:us"
] | null | 2025-04-06T20:09:48Z | ---
language:
- multilingual
tags:
- dutch
- multilingual
- restaurant-reviews
- sentiment-analysis
- multi-head
- xlm-roberta
- regression
datasets:
- cmotions/NL_restaurant_reviews
base_model:
- FacebookAI/xlm-roberta-base
---
# Restaurant Review Analyzer for Dutch Reviews (Multilingual)
This model analyzes restaurant reviews and predicts scores across three dimensions:
- Taste
- Service
- Ambiance
The model is based on XLM-RoBERTa, which provides multilingual capabilities, allowing it to potentially work with reviews in different languages, although it was primarily trained on Dutch reviews.
## Model Description
This is a multi-head regression model designed for restaurant review analysis. It uses XLM-RoBERTa as the encoder backbone with custom regression heads for each dimension. The model extracts semantic information from restaurant reviews and predicts quality scores for different aspects of the restaurant experience.
### Key Features
- **Multi-dimensional scoring**: Predicts scores for multiple restaurant aspects simultaneously
- **Multilingual capabilities**: Based on XLM-RoBERTa which supports 100+ languages
- **Transfer learning**: Benefits from the pre-trained knowledge of XLM-RoBERTa
- **Compact architecture**: Efficient design with minimal additional parameters beyond the base model
## Performance
The model achieves the following performance metrics on Dutch restaurant reviews:
| Dimension | MSE | MAE | R² |
|-----------|--------|--------|--------|
| Taste | 1.0103 | 0.7518 | 0.7719 |
| Service | 1.1899 | 0.8194 | 0.7643 |
| Ambiance | 1.3515 | 0.8741 | 0.4948 |
| Overall | 1.1839 | 0.8151 | 0.6770 |
## Baseline Comparison
To validate the effectiveness of our approach, we compared the XLM-RoBERTa model against a simple baseline model that uses TF-IDF vectorization and Ridge regression. Here's how our model performs relative to the baseline:
| Metric | Improvement over Baseline |
|--------|---------------------------|
| MSE | ~34.81% reduction |
| MAE | ~20.73% reduction |
| R² | ~29.50% increase |
The baseline model represents a traditional approach to review analysis using bag-of-words representations, which fail to capture the semantic relationships between words at which our transformer-based model excels.
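For concreteness, a minimal sketch of such a baseline (hyperparameters and variable names are assumptions, not the exact setup used):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# One TF-IDF + Ridge regressor per dimension (Taste shown; train_texts etc. are placeholders)
baseline = make_pipeline(TfidfVectorizer(max_features=20000), Ridge(alpha=1.0))
baseline.fit(train_texts, train_taste_scores)
taste_pred = baseline.predict(test_texts)
```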
### Performance Comparison Visualization

### Advantages over Baseline
- **Contextual understanding**: The XLM-RoBERTa model understands words in context, allowing it to better interpret nuanced expressions
- **Cross-lingual transfer**: Unlike the baseline, our model can leverage knowledge from other languages
- **Handling of negations**: The model correctly interprets negative phrases that bag-of-words models struggle with
- **Long-range dependencies**: Can understand relationships between parts of a sentence that are far apart
The significant performance improvement over the baseline demonstrates the value of using transformer-based architectures for this task, especially in multilingual contexts.
## Comparison with Large Language Models (Zero-Shot)
To evaluate our model's effectiveness in a broader context, we compared its performance against general-purpose large language models (LLMs) using zero-shot prompting on Dutch restaurant reviews. This comparison helps illustrate the value of domain-specific fine-tuning versus larger, more general models.
**Comparison Models:**
* Custom Model (Fine-tuned XLM-RoBERTa)
* `Qwen/Qwen2.5-3B`
* `Qwen/Qwen2.5-7B`
**Key Findings (based on 50 samples):**
* **Fine-tuned Model Outperforms Larger LLMs:** Despite having far fewer parameters (250M vs 3B/7B), our domain-specific model achieved lower error rates and higher correlation with human ratings across all dimensions.
* **Increasing Model Size Doesn't Guarantee Better Performance:** Interestingly, the larger 7B model often performed worse than the 3B model, particularly for taste scores, highlighting that domain specialization can be more important than scale.
* **Ambiance Detection Gap:** The most dramatic performance difference was in ambiance prediction, where our specialized model achieved an R² of 0.61 compared to only 0.05 for Qwen 3B and 0.17 for Qwen 7B.
**Performance Visualization:**

*(Based on evaluation using the official test split from the NL_restaurant_reviews dataset)*
**Detailed Comparison Metrics:**
| Model | Dimension | MSE ↓ | MAE ↓ | R² ↑ |
|-------|-----------|-------|-------|------|
| **Custom Model** | Taste | **1.06** | **0.76** | **0.76** |
| | Service | **1.40** | **0.91** | **0.72** |
| | Ambiance | **1.27** | **0.84** | **0.61** |
| **Qwen 2.5-3B** | Taste | 1.51 | 0.91 | 0.65 |
| | Service | 1.82 | 0.95 | 0.64 |
| | Ambiance | 3.10 | 1.25 | 0.05 |
| **Qwen 2.5-7B** | Taste | 1.98 | 1.13 | 0.54 |
| | Service | 2.05 | 1.01 | 0.59 |
| | Ambiance | 2.71 | 1.11 | 0.17 |
This comparison demonstrates that our specialized model achieves:
- 30-42% lower MSE compared to the LLMs
- 16-33% lower MAE across all dimensions
- Significantly higher R² values, especially for ambiance prediction
These results validate our approach of fine-tuning a smaller multilingual model specifically for restaurant review analysis rather than relying on general-purpose LLMs, providing superior performance with greater efficiency.
## Training Details
- **Base Model**: xlm-roberta-base (250M parameters)
- **Training Dataset**: [NL_restaurant_reviews](https://huggingface.co/datasets/cmotions/NL_restaurant_reviews)
- **Training Procedure**:
- Fine-tuned using MSE loss
- Optimizer: AdamW with weight decay 0.001
- Learning rate: 2e-5 for encoder, 6e-5 for regression heads
- Early stopping based on validation loss
- Gradient accumulation with accumulation steps = 4
- Trained with weighted loss, emphasizing the Ambiance dimension (weight 1.5)
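The weighted objective from the last bullet can be sketched as follows (the code structure and the non-Ambiance weights of 1.0 are assumptions; `outputs`/`targets` are per-dimension score dictionaries like those returned by the analyzer class in the Usage section):

```python
import torch.nn.functional as F

# Per-dimension loss weights, emphasizing Ambiance as described above
DIM_WEIGHTS = {"Taste": 1.0, "Service": 1.0, "Ambiance": 1.5}

def weighted_mse(outputs, targets):
    return sum(w * F.mse_loss(outputs[d], targets[d]) for d, w in DIM_WEIGHTS.items())
```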
## Limitations and Biases
- The model was primarily trained on Dutch restaurant reviews and may perform less effectively on other languages
- Although XLM-RoBERTa supports 100+ languages, performance will vary based on language representation in the pre-training data
- Scores are predicted on a 1-10 scale but may exhibit bias toward certain score ranges
- May not capture cultural nuances in restaurant reviews from different regions
- The set of models used in the comparison could be broadened to include more diverse and robust baselines
## Intended Use Cases
This model is designed for:
- Restaurant review aggregation and summarization
- Customer feedback analysis for restaurant owners
- Market research in the hospitality industry
- Cross-lingual restaurant review understanding
- User experience evaluation for dining establishments
## Languages
While trained primarily on Dutch data, the XLM-RoBERTa backbone has potential capabilities in these languages (among others):
- Dutch (primary)
- English
- German
- French
- Spanish
- Portuguese
- Italian
## Model Details
- **Model Type**: Multi-head regression model
- **Encoder**: XLM-RoBERTa Base (xlm-roberta-base)
- **Output Heads**: 3 separate regression heads (Taste, Service, Ambiance)
- **Parameters**: ~250M (mostly from XLM-RoBERTa)
- **Context Length**: 512 tokens
- **Output**: Scores on a 1-10 scale for each dimension
## Usage
Using this model requires defining a custom Python class (`RestaurantReviewAnalyzer`) in your environment *before* loading the model. You'll initialize this class, which loads the base encoder weights, and then manually load the custom regression head weights from the `regression_heads.json` file.
**1. Prerequisites:**
First, ensure you have the necessary libraries installed:
```bash
pip install torch transformers huggingface_hub
```
**2. Define the Custom Model Class:**
You **must** include the following `RestaurantReviewAnalyzer` class definition in your Python script or notebook. This definition needs to be **identical** to the one used during the model's training.
```python
# --- Imports needed for the class ---
import torch
import torch.nn as nn
from transformers import AutoModel
# --- Custom Model Class Definition ---
class RestaurantReviewAnalyzer(nn.Module):
"""
A custom model that uses a pre-trained transformer encoder (like XLM-RoBERTa)
and adds separate regression heads to predict scores for different dimensions
of a restaurant review (Taste, Service, Ambiance).
"""
def __init__(self, pretrained_model_name="xlm-roberta-base", num_dimensions=3, dropout_prob=0.1):
super().__init__()
print(f"Initializing custom model structure with base: {pretrained_model_name}")
# Load the pre-trained base model specified by pretrained_model_name
self.encoder = AutoModel.from_pretrained(pretrained_model_name)
self.config = self.encoder.config
hidden_size = self.config.hidden_size # Get hidden size from the base model's config
# Define the names of the dimensions to predict
self.dimension_names = ["Taste", "Service", "Ambiance"] # Should match training setup
# Create a ModuleDict to hold the separate regression head for each dimension
self.regression_heads = nn.ModuleDict({
dim: nn.Sequential(
nn.Dropout(dropout_prob), # Dropout layer
nn.Linear(hidden_size, 64), # First linear layer
nn.GELU(), # Activation function
nn.Linear(64, 1) # Output linear layer (predicts a single value)
) for dim in self.dimension_names[:num_dimensions]
})
print("Custom regression heads structure created.")
# Define the forward pass: how input data flows through the model
def forward(self, input_ids, attention_mask=None):
# Pass input through the base encoder
encoder_output = self.encoder(
input_ids=input_ids,
attention_mask=attention_mask
)
# Use the output corresponding to the [CLS] token as the pooled representation
# Shape: [batch_size, hidden_size]
pooled_output = encoder_output.last_hidden_state[:, 0]
results = {}
# Pass the pooled output through each dimension's regression head
for dim in self.dimension_names:
score = self.regression_heads[dim](pooled_output)
# Apply sigmoid and scale the output to be between 1.0 and 10.0
results[dim] = 1.0 + 9.0 * torch.sigmoid(score)
# Remove the last dimension (shape becomes [batch_size])
results[dim] = results[dim].squeeze(-1)
return results # Return a dictionary {'DimensionName': scores_tensor, ...}
```
**3. Load Tokenizer, Model, and Weights:**
Load the tokenizer, initialize the model structure (this loads the base XLM-R weights), determine the device (`cuda` or `cpu`), move the model to the device, and then load the custom regression head weights from `regression_heads.json`.
```python
# --- Further imports ---
import torch
from transformers import AutoTokenizer
import json
from huggingface_hub import hf_hub_download
# --- Configuration ---
repo_id = "c0sm1c9/restaurant-review-analyzer-dutch"
# --- Load Tokenizer ---
# The tokenizer converts text into numerical IDs that the model understands.
print(f"Loading tokenizer from: {repo_id}")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
print("Tokenizer loaded.")
# --- Initialize Model Structure ---
# This creates an instance of your custom RestaurantReviewAnalyzer class.
# The `AutoModel.from_pretrained(pretrained_model_name)` inside the __init__
# loads the weights of the base model (e.g., xlm-roberta-base) from the repo_id.
print("Initializing model structure (loads base encoder weights)...")
model = RestaurantReviewAnalyzer(pretrained_model_name=repo_id)
print("Model structure initialized.")
# --- Determine Device ---
# Choose the device to run the model on: GPU (cuda) if available, otherwise CPU.
# It's crucial that the model and input data reside on the same device.
model_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"\nTarget device selected: {model_device}")
# --- Move Model to Device ---
# Move the entire model (including base encoder and regression heads) to the chosen device.
model.to(model_device)
print(f"Model moved to {model_device}.")
# --- Load Custom Regression Head Weights ---
# These weights were trained specifically for the regression task and are stored separately.
try:
regression_heads_filename = "regression_heads.json" # The name of the weights file in the repo
print(f"Downloading custom weights '{regression_heads_filename}'...")
# Download the file from the Hugging Face Hub
regression_heads_path = hf_hub_download(
repo_id=repo_id,
filename=regression_heads_filename
)
print(f"Downloaded weights file to: {regression_heads_path}")
# Load the weights from the downloaded JSON file
print("Loading weights from JSON file...")
with open(regression_heads_path, 'r') as f:
regression_heads_dict_from_json = json.load(f)
print("JSON weights data loaded.")
# Convert the loaded data (lists) back into a PyTorch state_dict
# A state_dict maps parameter names (strings) to their tensor values.
regression_heads_state_dict = {}
print("Converting JSON weights to tensors on target device...")
# Iterate through dimensions ('Taste', 'Service', 'Ambiance') in the JSON data
for dim_name, params in regression_heads_dict_from_json.items():
# Check if the dimension exists in our model's regression heads
if dim_name in model.regression_heads:
# Get the state_dict of the corresponding head in the *current* model
# This helps ensure we use the correct parameter names, shapes, and dtypes.
layer_state_dict = model.regression_heads[dim_name].state_dict()
# Iterate through parameters ('1.weight', '1.bias', '3.weight', '3.bias', etc.) for this dimension
for param_name, param_value_list in params.items():
# Find the matching parameter key in the model's layer state_dict
# This handles potential key name differences (e.g., due to ModuleDict prefixing)
for model_param_key in layer_state_dict.keys():
if model_param_key == param_name or model_param_key.endswith("." + param_name):
# Get the target data type and shape from the model's parameter
target_dtype = layer_state_dict[model_param_key].dtype
target_shape = layer_state_dict[model_param_key].shape
# Create the tensor directly on the target device (model_device) and with the correct dtype
tensor_value = torch.tensor(param_value_list, dtype=target_dtype, device=model_device)
# Verify the number of elements matches before reshaping (safety check)
if tensor_value.numel() != target_shape.numel():
raise RuntimeError(f"Shape mismatch for {dim_name}.{model_param_key}: JSON({tensor_value.numel()}) vs Model({target_shape.numel()})")
# Reshape the tensor to match the model's parameter shape
tensor_value = tensor_value.view(target_shape)
# Store the tensor in the state_dict using the model's full key name (e.g., 'Taste.1.weight')
regression_heads_state_dict[f"{dim_name}.{model_param_key}"] = tensor_value
break # Found the matching key, move to the next parameter in the JSON
# Load the constructed state_dict into the `regression_heads` part of the model
# `strict=True` ensures all keys match between the state_dict and the model module.
print("Applying weights to the model's regression heads...")
model.regression_heads.load_state_dict(regression_heads_state_dict, strict=True)
print("Regression head weights loaded successfully into the model.")
print("Model is ready for inference.")
except Exception as e:
print(f"ERROR during weight loading: {e}")
print("Please check the model files and class definition.")
# Depending on your application, you might want to handle this error more gracefully
raise e # Re-raise the exception to halt execution if loading fails
```
**4. Perform Inference:**
Now you can use the fully loaded model to predict scores for new reviews. Remember to move the tokenized input tensors to the same device as the model.
```python
# --- Example Inference ---
print("\n--- Starting Example Inference ---")
# Set the model to evaluation mode (important for consistent results)
# This disables mechanisms like dropout that are only used during training.
model.eval()
# Example Dutch restaurant review
review = "Heerlijk gegeten bij dit restaurant! De service was top en de sfeer gezellig."
# English: "Ate wonderfully at this restaurant! The service was great and the atmosphere cozy."
print(f"Input Review: '{review}'")
# Tokenize the input text using the loaded tokenizer
print("Tokenizing the input review...")
# `return_tensors="pt"` specifies PyTorch tensors as output.
# `padding=True` pads the sequence to the maximum length in the batch (or max_length).
# `truncation=True` cuts off text longer than max_length.
# `max_length=512` is a common sequence length limit for BERT-like models.
inputs = tokenizer(review, return_tensors="pt", padding=True, truncation=True, max_length=512)
# `inputs` is now a dictionary containing 'input_ids' and 'attention_mask' tensors.
# --- CRITICAL STEP: Move Input Tensors to the Model's Device ---
# Both the model and its input data *must* be on the same device (CPU or GPU).
print(f"Moving input tensors to {model_device}...")
inputs = {k: v.to(model_device) for k, v in inputs.items()}
print("Input tensors moved.")
# Perform inference without calculating gradients
# `torch.no_grad()` reduces memory usage and speeds up computation during inference.
print("Performing inference with the model...")
with torch.no_grad():
# Pass the prepared inputs to the model
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
# `outputs` is the dictionary returned by the model's forward method:
# e.g., {'Taste': tensor([9.2], device='cuda:0'), ...}
# Process and display the results
print("\nPredicted Scores (Scale 1-10):")
for dim, score_tensor in outputs.items():
# Use `.item()` to extract the single numerical value from the tensor
# Format the float to one decimal place using f-string formatting
print(f" {dim}: {score_tensor.item():.1f}")
print("\n--- Inference Complete ---")
# Example Output (scores may vary slightly):
# Predicted Scores (Scale 1-10):
# Taste: 9.2
# Service: 9.5
# Ambiance: 8.8
```
## Citation
If you use this model in your research, please cite:
```
@misc{restaurant-review-analyzer-dutch,
author = {Haitao Tao},
title = {Restaurant Review Analyzer (Multilingual)},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/c0sm1c9/restaurant-review-analyzer-dutch}}
}
```
## Acknowledgements
- XLM-RoBERTa base model by Facebook AI Research
- Dutch restaurant reviews dataset by cmotions
- Hugging Face for the model hosting infrastructure
|
christopherthompson81/Qwen3-30B-A3B-UD-Q4_K_XL_split | christopherthompson81 | 2025-05-05T14:57:53Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-05T14:32:35Z | ---
license: apache-2.0
---
|
AlbertWayne/llama381binstruct_summarize_short_merged | AlbertWayne | 2025-05-05T14:57:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-05T14:54:22Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
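In the absence of author-provided instructions, a minimal loading sketch, assuming standard transformers usage for this checkpoint (the 4-bit weights require `bitsandbytes`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained("AlbertWayne/llama381binstruct_summarize_short_merged")
model = AutoModelForCausalLM.from_pretrained(
    "AlbertWayne/llama381binstruct_summarize_short_merged", device_map="auto"
)
```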
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WizardofOz/ppo-LunarLander-v2 | WizardofOz | 2025-05-05T14:55:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-05T14:55:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.55 +/- 9.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent from the Hub (the checkpoint filename inside the repo is an assumption):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and restore the PPO agent
checkpoint = load_from_hub("WizardofOz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
henryhe0123/pc-agent-72b | henryhe0123 | 2025-05-05T14:55:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:henryhe0123/pc-agent-72b",
"base_model:finetune:henryhe0123/pc-agent-72b",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-05T01:12:00Z | ---
library_name: transformers
license: other
base_model: henryhe0123/pc-agent-72b
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-VL-72B-sft-40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-VL-72B-sft-40
This model is a fine-tuned version of [Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) (referenced from a local checkpoint path during training) on the pcagent40 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
NekoJar/vit-Facial-Expression-Recognition | NekoJar | 2025-05-05T14:54:31Z | 1 | 0 | null | [
"safetensors",
"vit",
"generated_from_trainer",
"base_model:mo-thecreator/vit-Facial-Expression-Recognition",
"base_model:finetune:mo-thecreator/vit-Facial-Expression-Recognition",
"region:us"
] | null | 2024-12-01T01:59:42Z | ---
base_model: motheecreator/vit-Facial-Expression-Recognition
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-Facial-Expression-Recognition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 274615179022269209159025612029952.0000
- Accuracy: 0.2031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:--------------------------------------:|:-----:|:----:|:--------------------------------------:|:--------:|
| 263990515307201053637938614632448.0000 | 1.1 | 100 | 274631465670911057443267253633024.0000 | 0.2030 |
| 261320303401263261122219774836736.0000 | 2.2 | 200 | 274623651174413068480281952911360.0000 | 0.2031 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
silviasapora/gemma-7b-cpo-noisy-5e-5-005-v140 | silviasapora | 2025-05-05T14:53:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:silviasapora/dpo_7k_noisy_10",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T13:51:37Z | ---
base_model: google/gemma-7b
datasets:
- silviasapora/dpo_7k_noisy_10
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [silviasapora/dpo_7k_noisy_10](https://huggingface.co/datasets/silviasapora/dpo_7k_noisy_10) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-cpo-noisy-5e-5-005-v140", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/wwytxuf9)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
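A minimal TRL sketch of this kind of ORPO setup (hyperparameters are assumptions, and the dataset must provide prompt/chosen/rejected columns):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
dataset = load_dataset("silviasapora/dpo_7k_noisy_10", split="train")

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="gemma-7b-orpo"),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```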
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.1
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cuongdk253/gemma-3-12b | cuongdk253 | 2025-05-05T14:52:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T14:51:51Z | ---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cuongdk253
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/Hamanasu-Magnum-QwQ-32B-Q8_0-GGUF | Triangle104 | 2025-05-05T14:51:17Z | 0 | 0 | null | [
"gguf",
"qwen",
"roleplay",
"finetune",
"storywriting",
"llama-cpp",
"gguf-my-repo",
"dataset:NewEden/Orion-LIT",
"dataset:NewEden/Orion-Asstr-Stories-16K",
"dataset:Mielikki/Erebus-87k",
"dataset:NewEden/RP-logs-V2-Experimental-prefixed",
"dataset:NewEden/Creative_Writing-Complexity",
"dataset:NewEden/Discord-Filtered",
"dataset:NewEden/DeepseekRP-Filtered",
"dataset:NewEden/Storium-Prefixed-Clean",
"dataset:NewEden/Basket-Weaving-Filtered",
"dataset:NewEden/LIMARP-Complexity",
"dataset:NewEden/Misc-Data-Sharegpt-Prefixed",
"dataset:NewEden/BlueSky-10K-Complexity",
"dataset:NewEden/OpenCAI-ShareGPT",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:PocketDoc/Dans-Personamaxx-Logs",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:anthracite-org/kalo_opus_misc_240827",
"dataset:anthracite-org/kalo_misc_part2",
"dataset:NewEden/Claude-Instruct-5K",
"dataset:NewEden/Claude-Instruct-2.7K",
"base_model:Delta-Vector/Hamanasu-Magnum-QwQ-32B",
"base_model:quantized:Delta-Vector/Hamanasu-Magnum-QwQ-32B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T14:46:45Z | ---
base_model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
datasets:
- NewEden/Orion-LIT
- NewEden/Orion-Asstr-Stories-16K
- Mielikki/Erebus-87k
- NewEden/RP-logs-V2-Experimental-prefixed
- NewEden/Creative_Writing-Complexity
- NewEden/Discord-Filtered
- NewEden/DeepseekRP-Filtered
- NewEden/Storium-Prefixed-Clean
- NewEden/Basket-Weaving-Filtered
- NewEden/LIMARP-Complexity
- NewEden/Misc-Data-Sharegpt-Prefixed
- NewEden/BlueSky-10K-Complexity
- NewEden/OpenCAI-ShareGPT
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Personamaxx-Logs
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
- NewEden/Claude-Instruct-5K
- NewEden/Claude-Instruct-2.7K
tags:
- qwen
- roleplay
- finetune
- storywriting
- llama-cpp
- gguf-my-repo
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/jg2NWmCUfPyzizm2USjMt.jpeg
---
# Triangle104/Hamanasu-Magnum-QwQ-32B-Q8_0-GGUF
This model was converted to GGUF format from [`Delta-Vector/Hamanasu-Magnum-QwQ-32B`](https://huggingface.co/Delta-Vector/Hamanasu-Magnum-QwQ-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Delta-Vector/Hamanasu-Magnum-QwQ-32B) for more details on the model.
---
This model is a finetune of Hamanasu-QwQ-V2-RP intended to replicate the prose of the Claude models Opus and Sonnet. Read more about the model's training on my blog: https://openai-sucks.bearblog.dev/. The model is suited for traditional RP. All thanks to Ruka-Hamanasu for funding the train.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q8_0-GGUF --hf-file hamanasu-magnum-qwq-32b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q8_0-GGUF --hf-file hamanasu-magnum-qwq-32b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q8_0-GGUF --hf-file hamanasu-magnum-qwq-32b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q8_0-GGUF --hf-file hamanasu-magnum-qwq-32b-q8_0.gguf -c 2048
```
|
mlfoundations-dev/d1_code_fasttext_1k | mlfoundations-dev | 2025-05-05T14:50:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:14:18Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_code_fasttext_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_code_fasttext_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_code_fasttext_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
stabgan/gemma-3-1b-pt-chkpt-v5-dosage | stabgan | 2025-05-05T14:49:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:stabgan/gemma-3-1b-pt-chkpt-v4",
"base_model:finetune:stabgan/gemma-3-1b-pt-chkpt-v4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T14:48:46Z | ---
base_model: stabgan/gemma-3-1b-pt-chkpt-v4
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** stabgan
- **License:** apache-2.0
- **Finetuned from model:** stabgan/gemma-3-1b-pt-chkpt-v4
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/d1_code_gpt_1k | mlfoundations-dev | 2025-05-05T14:48:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:13:37Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_code_gpt_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_code_gpt_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_code_gpt_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
bruhzair/ignore-base2 | bruhzair | 2025-05-05T14:48:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T13:42:20Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# base2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with /workspace/cache/models--TheDrummer--L3.3-Interleaved-Upscale-105B/snapshots/dc1c192564ddf43133a71a7bdc8e6e91c69a2835 as the base.
### Models Merged
The following models were included in the merge:
* /workspace/nemo2
* /workspace/deep2
* /workspace/hydro2
* /workspace/cache/models--bruhzair--ignore-merge-17/snapshots/bd0af76a6bc4d9ae4bab5fa6b50e6545e6f3fd4f
* /workspace/herme2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: /workspace/cache/models--TheDrummer--L3.3-Interleaved-Upscale-105B/snapshots/dc1c192564ddf43133a71a7bdc8e6e91c69a2835
chat_template: llama3
dtype: float32
merge_method: sce
modules:
default:
slices:
- sources:
- layer_range: [0, 120]
model: /workspace/hydro2
parameters:
select_topk: 0.3
- layer_range: [0, 120]
model: /workspace/nemo2
parameters:
select_topk: 0.3
- layer_range: [0, 120]
model: /workspace/herme2
parameters:
select_topk: 0.35
- layer_range: [0, 120]
model: /workspace/cache/models--bruhzair--ignore-merge-17/snapshots/bd0af76a6bc4d9ae4bab5fa6b50e6545e6f3fd4f
parameters:
select_topk: 0.25
- layer_range: [0, 120]
model: /workspace/deep2
parameters:
select_topk: 0.3
- layer_range: [0, 120]
model: /workspace/cache/models--TheDrummer--L3.3-Interleaved-Upscale-105B/snapshots/dc1c192564ddf43133a71a7bdc8e6e91c69a2835
parameters:
select_topk: 0.15
out_dtype: bfloat16
parameters:
int8_mask: 1.0
tokenizer:
source: base
```
|
Lelon/scope-bert-german | Lelon | 2025-05-05T14:47:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-05T14:47:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gimmy256/Qwen3-14B_lora | gimmy256 | 2025-05-05T14:43:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T13:02:16Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gimmy256
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rushikesh323/newmode | rushikesh323 | 2025-05-05T14:31:28Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T14:28:09Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rushikesh323
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ipranavks/unsloth_finetune | ipranavks | 2025-05-05T14:30:18Z | 0 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-05T14:29:34Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ipranavks
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jssky/79ea806e-d8ac-4668-bca2-00f0c6fdb335 | jssky | 2025-05-05T14:29:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:finetune:NousResearch/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T14:07:39Z | ---
library_name: transformers
base_model: NousResearch/Llama-2-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 79ea806e-d8ac-4668-bca2-00f0c6fdb335
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.0`
```yaml
base_model: NousResearch/Llama-2-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 012ab4813cc99fb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/012ab4813cc99fb8_train_data.json
type:
field_input: evidence
field_instruction: question
field_output: SQL
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: jssky/79ea806e-d8ac-4668-bca2-00f0c6fdb335
hub_repo: null
hub_strategy: checkpoint
hub_token: null
huggingface_repo_visibility: public
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lr_scheduler: cosine
max_steps: 1000
micro_batch_size: 8
mlflow_experiment_name: /tmp/012ab4813cc99fb8_train_data.json
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: false
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: lora3278-252e-44d7-9491-1b28d344421c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: lora3278-252e-44d7-9491-1b28d344421c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 79ea806e-d8ac-4668-bca2-00f0c6fdb335
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 437
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9649 | 0.0229 | 1 | 0.9458 |
| 0.096 | 3.48 | 150 | 0.3537 |
| 0.0047 | 6.96 | 300 | 0.3649 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Iredteam/Feather-payload-chatbot | Iredteam | 2025-05-05T14:29:43Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-05-05T14:23:18Z | ---
license: mit
---
⚠️ This project demonstrates how Feather serialization can be abused to execute reverse shell payloads. For educational and red-teaming purposes only.
# Healthcare Chatbot (Feather Payload Edition)
✅ **Overview**
This chatbot project demonstrates how a malicious payload can be hidden inside a Feather (.feather) file, a format often used in data science workflows. The chatbot uses a modified Q&A dataset in which the payload is executed upon loading.
✅ **Important:** This is for **educational research** only. Do not execute untrusted Feather files.
---
## 🚀 How to Run
### 1. Generate the Feather Payload
```bash
python generate_data_feather.py
```
### 2. Launch the Chatbot
```bash
streamlit run healthcare_chatbot_feather.py
```
A reverse shell connection will attempt to connect back to the attacker's machine as the Feather file is deserialized.
---
## 📂 File Structure
- `generate_data_feather.py`: Creates a malicious Feather file.
- `train_data_mod_obfuscated_fixed.feather`: The resulting Feather file.
- `healthcare_chatbot_feather.py`: Loads the payload during chatbot startup.
---
## 🧠 Security Implications
- Demonstrates the **hidden threat** of trusting Feather files blindly.
- Many blue teams and EDRs ignore Feather files.
- Shows how scientific formats can be abused for stealth payload delivery.
---
## 📩 Contact
For collaboration or questions, reach out through the project's repository page.
|
istominvi/swtsall_250_16_32 | istominvi | 2025-05-05T14:28:28Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-05T14:28:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: swtsall
---
# Swtsall_250_16_32
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `swtsall` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "swtsall",
"lora_weights": "https://huggingface.co/istominvi/swtsall_250_16_32/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('istominvi/swtsall_250_16_32', weight_name='lora.safetensors')
image = pipeline('swtsall').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1766
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/istominvi/swtsall_250_16_32/discussions) to add images that show off what you’ve made with this LoRA.
|
alibelhrak/LSTM_MODEL_for_2024_US_Election_Sentiment_on_X | alibelhrak | 2025-05-05T14:25:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T14:25:54Z | ---
license: apache-2.0
---
|
roshanrb001/qwen-lora-model-3b-adapter | roshanrb001 | 2025-05-05T14:25:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T14:25:34Z | ---
base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** roshanrb001
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
exala/db_fe2_9.2.1d | exala | 2025-05-05T14:24:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-05T13:44:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elicara/vincentlora | elicara | 2025-05-05T14:23:57Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-05T14:21:46Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
Close-up photo of a man<lora:vincent:0.75> vincentlora, looks at camera,
simple bedroom, cinematic, <lora:amateurphoto-6version:0.0>
output:
url: images/Vincent_Portrait_003.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: vincentlora
---
# Vincent
<Gallery />
## Model description
Vincent
## Trigger words
You should use `vincentlora` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/elicara/vincentlora/tree/main) them in the Files & versions tab.
|
kamelcharaf/SFT-mistral-7B-mrd3 | kamelcharaf | 2025-05-05T14:22:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T16:44:58Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: transformers
model_name: SFT-mistral-7B-mrd3
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SFT-mistral-7B-mrd3
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kamelcharaf/SFT-mistral-7B-mrd3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kamel-charaf-epfl/huggingface/runs/1auu7gxf)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.48.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf | RichardErkhov | 2025-05-05T14:21:33Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T10:54:22Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924 - GGUF
- Model creator: https://huggingface.co/KONIexp/
- Original model: https://huggingface.co/KONIexp/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q2_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q2_K.gguf) | Q2_K | 2.96GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K.gguf) | Q3_K | 3.74GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_0.gguf) | Q4_0 | 4.34GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_K.gguf) | Q4_K | 4.58GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q4_1.gguf) | Q4_1 | 4.78GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_0.gguf) | Q5_0 | 5.21GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_K.gguf) | Q5_K | 5.34GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q5_1.gguf) | Q5_1 | 5.65GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q6_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q6_K.gguf) | Q6_K | 6.14GB |
| [v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q8_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_05_0000005_09_based_on_llama3_1_8b_20240924.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/QwQ-Gutenberg-Doppel-0.5 | Triangle104 | 2025-05-05T14:17:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/QwQ-32B",
"base_model:merge:Qwen/QwQ-32B",
"base_model:nbeerbower/Qwen2.5-Gutenberg-Doppel-32B",
"base_model:merge:nbeerbower/Qwen2.5-Gutenberg-Doppel-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T13:57:57Z | ---
base_model:
- nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
- Qwen/QwQ-32B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) as the base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/Qwen2.5-Gutenberg-Doppel-32B](https://huggingface.co/nbeerbower/Qwen2.5-Gutenberg-Doppel-32B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
parameters:
density: 0.5
weight: 0.5
- model: Qwen/QwQ-32B
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: Qwen/QwQ-32B
parameters:
normalize: false
int8_mask: true
dtype: float16
``` |
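As a rough sketch, the same merge can be reproduced with mergekit's Python API (assuming the YAML above is saved as `config.yaml`; the output path and options are illustrative, not the author's exact invocation):
```python
# Minimal sketch of running the merge config above with mergekit; the
# config path, output path, and options here are assumptions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./QwQ-Gutenberg-Doppel-0.5",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```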
azatovhikmatyor/wav2vec2_test | azatovhikmatyor | 2025-05-05T14:17:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-05T13:58:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
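In the absence of an official snippet, a minimal usage sketch (assuming the repository ships a complete Wav2Vec2 processor; the audio path is a placeholder):
```python
# Hypothetical quick start via the ASR pipeline; "speech_sample.wav"
# is a placeholder for a real (ideally 16 kHz mono) audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="azatovhikmatyor/wav2vec2_test")
print(asr("speech_sample.wav")["text"])
```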
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SujitShelar/llama3-medchat-8b-lora | SujitShelar | 2025-05-05T14:15:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama-3",
"4bit",
"unsloth",
"peft",
"lora",
"medical",
"question-answering",
"bf16",
"a100",
"bitsandbytes",
"fine-tuned",
"healthcare",
"instruction-tuned",
"chat",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-05-05T05:40:09Z | ---
library_name: transformers
tags:
- llama-3
- 4bit
- unsloth
- peft
- lora
- medical
- question-answering
- bf16
- a100
- bitsandbytes
- fine-tuned
- healthcare
- instruction-tuned
- chat
---
# Model Card for llama3-medchat-8b-lora
A 4-bit quantized, LoRA-adapted version of Meta's LLaMA 3 8B model, fine-tuned on the medalpaca/medical_meadow_medical_flashcards dataset for medical question-answering tasks. This model is optimized for efficient inference and training on hardware like NVIDIA A100 GPUs using BF16 precision.
## Model Details
- **Base model:** unsloth/llama-3-8b-bnb-4bit
- **Fine-tuned by:** Sujit Shelar
- **Model type:** Auto-regressive transformer (decoder-only)
- **Quantization:** 4-bit NF4 via bitsandbytes
- **PEFT:** LoRA (r=4, alpha=8, dropout=0.01)
- **Language:** English
- **License:** LLaMA 3 Community License
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** SujitShelar/llama3-medchat-8b-lora
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for generating concise, accurate answers to medical questions, making it suitable for applications like:
- Medical education tools
- Clinical decision support systems
- Healthcare chatbots
- Medical flashcard applications
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
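No official snippet is provided yet; the following is a minimal, unofficial sketch based on the details above: the 4-bit base model `unsloth/llama-3-8b-bnb-4bit` plus this repository's LoRA adapter. The prompt is illustrative only; if the repo instead holds merged weights rather than an adapter, load it directly with `AutoModelForCausalLM`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the 4-bit quantized base model named in this card (requires bitsandbytes).
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")

# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "SujitShelar/llama3-medchat-8b-lora")

prompt = "What are the common symptoms of iron-deficiency anemia?"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```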
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hungtran0509/a2c-PandaReachDense-v3 | hungtran0509 | 2025-05-05T14:15:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-05T14:12:09Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -14.14 +/- 4.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual SB3-Hub naming convention; check the repo's Files tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the usual SB3-Hub convention; adjust if the repo differs.
checkpoint = load_from_hub("hungtran0509/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Lelon/scope-bert | Lelon | 2025-05-05T14:14:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-05T14:14:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
roshanrb001/qwen-lora-model-3b | roshanrb001 | 2025-05-05T14:14:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-05T14:08:10Z | ---
base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** roshanrb001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Misaka27260/Qwen2.5-VL-7B-Instruct-abliterated-GGUF | Misaka27260 | 2025-05-05T14:13:07Z | 813 | 1 | null | [
"gguf",
"qwen2_5_vl",
"multimodal",
"abliterated",
"uncensored",
"text-generation-inference",
"image-text-to-text",
"zh",
"en",
"base_model:huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-04-13T00:39:29Z | ---
license: apache-2.0
language:
- zh
- en
base_model:
- huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated
pipeline_tag: image-text-to-text
tags:
- qwen2_5_vl
- multimodal
- abliterated
- uncensored
- text-generation-inference
---
Quantized GGUF files converted from https://huggingface.co/huihui-ai/Qwen2.5-VL-7B-Instruct-abliterated.
The `--leave-output-tensor` flag was used during quantization to keep the output layer at FP16 precision.
LM Studio is recommended for deploying it.
**The runtime environment should be upgraded to >= 1.29.0 (beta).**
The imatrix.dat file is from mradermacher/Qwen2.5-VL-7B-Instruct-abliterated-i1-GGUF |
BootesVoid/cmab4oh8o005rdwgp0z6gafla_cmab4r6m3005ydwgp7crygfp7 | BootesVoid | 2025-05-05T14:12:23Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-05T14:12:21Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SARAH
---
# Cmab4Oh8O005Rdwgp0Z6Gafla_Cmab4R6M3005Ydwgp7Crygfp7
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SARAH` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SARAH",
"lora_weights": "https://huggingface.co/BootesVoid/cmab4oh8o005rdwgp0z6gafla_cmab4r6m3005ydwgp7crygfp7/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmab4oh8o005rdwgp0z6gafla_cmab4r6m3005ydwgp7crygfp7', weight_name='lora.safetensors')
image = pipeline('SARAH').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmab4oh8o005rdwgp0z6gafla_cmab4r6m3005ydwgp7crygfp7/discussions) to add images that show off what you’ve made with this LoRA.
|
MAAT-EL-DUAT/LADY-GODIVA-CHOCALATES | MAAT-EL-DUAT | 2025-05-05T14:11:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-05T14:06:40Z | 


|
jnjj/model-v1 | jnjj | 2025-05-05T14:10:02Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-05T14:10:01Z | # Model README
Continuously fine-tuned model.
|
JohnHoo/Qwen3-erparrange_model | JohnHoo | 2025-05-05T14:08:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T14:08:08Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JohnHoo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nanyaas/deepseek-r1-medicalQA-Qwen | nanyaas | 2025-05-05T14:08:15Z | 0 | 0 | null | [
"safetensors",
"unsloth",
"license:cc-by-4.0",
"region:us"
] | null | 2025-04-30T14:53:03Z | ---
license: cc-by-4.0
tags:
- unsloth
---
|
hi-go/xlm-roberta-base-finetuned-panx-all-v2 | hi-go | 2025-05-05T14:08:07Z | 0 | 0 | null | [
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"token-classification",
"license:mit",
"region:us"
] | token-classification | 2025-05-05T12:58:30Z | ---
license: mit
tags:
- generated_from_trainer
- token-classification
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all-v2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unnamed dataset (presumably PAN-X, given the model name).
It achieves the following results on the evaluation set:
- Loss: 0.1758
- F1: 0.8546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3028 | 1.0 | 835 | 0.1972 | 0.8054 |
| 0.1556 | 2.0 | 1670 | 0.1765 | 0.8422 |
| 0.1019 | 3.0 | 2505 | 0.1758 | 0.8546 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.6.0+cu118
- Datasets 1.16.1
- Tokenizers 0.21.1
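The card omits a usage example; below is a minimal, unofficial sketch for token classification (the German test sentence is illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned XLM-R NER model from this repository.
ner = pipeline(
    "token-classification",
    model="hi-go/xlm-roberta-base-finetuned-panx-all-v2",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte im Mai Paris."))
```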
|
YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3-Q4_K_M-GGUF | YOYO-AI | 2025-05-05T14:07:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3",
"base_model:quantized:YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T14:05:56Z | ---
base_model: YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3-Q4_K_M-GGUF
This model was converted to GGUF format from [`YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3`](https://huggingface.co/YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3-Q4_K_M-GGUF --hf-file qwen2.5-32b-yoyo-reasoning-v3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3-Q4_K_M-GGUF --hf-file qwen2.5-32b-yoyo-reasoning-v3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3-Q4_K_M-GGUF --hf-file qwen2.5-32b-yoyo-reasoning-v3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YOYO-AI/Qwen2.5-32B-YOYO-reasoning-v3-Q4_K_M-GGUF --hf-file qwen2.5-32b-yoyo-reasoning-v3-q4_k_m.gguf -c 2048
```
|
mjs227/rltu_grpo_10_0_74-llama-merged | mjs227 | 2025-05-05T14:05:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T13:14:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ijterror/AnaPitFluxLora | ijterror | 2025-05-05T14:04:48Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-05T14:02:54Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ptyq
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Ana Mol Lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `ptyq` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
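For diffusers users, a sketch mirroring the pattern used by other FLUX LoRA cards in this dump (the `weight_name` below is hypothetical; check the repo's Files tab for the actual .safetensors filename):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
# weight_name is an assumption; use the file actually listed in the repo.
pipeline.load_lora_weights('ijterror/AnaPitFluxLora', weight_name='lora.safetensors')
image = pipeline('ptyq').images[0]
```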
|
prithivMLmods/coreOCR-7B-050325-preview | prithivMLmods | 2025-05-05T14:02:48Z | 2 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"OCR",
"Pdf",
"Doc",
"Image",
"conversational",
"en",
"dataset:allenai/olmOCR-mix-0225",
"dataset:prithivMLmods/Opendoc1-Analysis-Recognition",
"dataset:prithivMLmods/Opendoc2-Analysis-Recognition",
"dataset:prithivMLmods/Openpdf-Analysis-Recognition",
"arxiv:2412.08746",
"arxiv:2309.00071",
"arxiv:2409.12191",
"arxiv:2308.12966",
"arxiv:2412.02210",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-03T08:37:05Z | ---
license: apache-2.0
datasets:
- allenai/olmOCR-mix-0225
- prithivMLmods/Opendoc1-Analysis-Recognition
- prithivMLmods/Opendoc2-Analysis-Recognition
- prithivMLmods/Openpdf-Analysis-Recognition
pipeline_tag: image-text-to-text
language:
- en
base_model:
- Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
tags:
- text-generation-inference
- OCR
- Pdf
- Doc
- Image
---

# **coreOCR-7B-050325-preview**
> The **coreOCR-7B-050325-preview** model is a fine-tuned version of **Qwen/Qwen2-VL-7B**, optimized for **Document-Level Optical Character Recognition (OCR)**, **long-context vision-language understanding**, and **accurate image-to-text conversion with mathematical LaTeX formatting**. Designed with a focus on high-fidelity visual-textual comprehension, this model enhances document parsing, structured data extraction, and complex visual reasoning.
# Key Enhancements
* **Advanced Document-Level OCR**: Accurately processes and extracts structured text from complex, multi-page documents including invoices, forms, and research papers.
* **Enhanced Long-Context Vision-Language Understanding**: Supports long-text retrieval and reasoning from documents and multimedia inputs, including dense text blocks, diagrams, and math content.
* **SoTA Understanding Across Image Resolutions**: Achieves state-of-the-art results on visual benchmarks including MathVista, DocVQA, RealWorldQA, and MTVQA.
* **Video Comprehension up to 20+ minutes**: Capable of high-quality video-based question answering, dialogue generation, and content summarization from long video sequences.
* **Device Control via Visual Commands**: With complex reasoning and perception capabilities, it can be integrated with devices like mobile phones or robots for visually grounded automation.
# Quick Start with Transformers
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2VLForConditionalGeneration.from_pretrained(
"prithivMLmods/coreOCR-7B-050325-preview", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/coreOCR-7B-050325-preview")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
# Training Details
| Parameter | Value |
|-------------------------|----------------------------------------------------|
| **Dataset Size** | 274,209 samples (Modular Combination of Datasets) |
| **Model Architecture** | `Qwen2VLForConditionalGeneration` |
| **Hardware** | 2 × NVIDIA A100 SXM (with 32 vCPUs) |
| **Total Disk** | 160,000 MB |
| **Training Time** | 10,390 seconds (~2.88 hours) |
| **Learning Rate** | 1e-5 |
| **Scheduler** | Linear Decay |
| **Warmup Steps** | 700 |
| **Precision** | bfloat16 |
> [!note]
> The open dataset's image-text responses will be updated soon.
# Intended Use
This model is intended for:
* Document analysis and OCR from scanned images, PDFs, and camera input.
* Image-based question answering (e.g., educational content, diagrams, receipts).
* Math problem solving and LaTeX text generation from handwritten or printed math content.
* Long-context vision-text applications such as multi-slide document retrieval and dense information extraction.
* Multilingual OCR workflows for cross-lingual business documents and global data digitization.
* AI agents for mobile/robotic interaction through visual context.
# Limitations
* Performance may degrade on extremely noisy or low-resolution images.
* Not suitable for real-time inference on edge devices due to model size and memory demands.
* While multilingual, performance on low-resource or rare scripts may vary.
* Not optimized for high-speed processing of video streams in constrained environments.
* Contextual understanding depends on visual tokenization parameters; improper configuration may affect output quality.
* Outputs may occasionally include hallucinations or incomplete answers in long-context queries.
# References
- **DocVLM: Make Your VLM an Efficient Reader**
[https://arxiv.org/pdf/2412.08746v1](https://arxiv.org/pdf/2412.08746v1)
- **YaRN: Efficient Context Window Extension of Large Language Models**
[https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)
- **Qwen2-VL: Enhancing Vision-Language Model’s Perception of the World at Any Resolution**
[https://arxiv.org/pdf/2409.12191](https://arxiv.org/pdf/2409.12191)
- **Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond**
[https://arxiv.org/pdf/2308.12966](https://arxiv.org/pdf/2308.12966)
- **A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy**
[https://arxiv.org/pdf/2412.02210](https://arxiv.org/pdf/2412.02210) |
ARM-Development/Llama-3.1-8B-text-full-1.0 | ARM-Development | 2025-05-05T14:00:11Z | 0 | 0 | null | [
"safetensors",
"gguf",
"llama",
"dataset:ARM-Development/txt_extraction_results_FULL",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T11:56:26Z | ---
license: mit
datasets:
- ARM-Development/txt_extraction_results_FULL
--- |
VendyGo/llama3-8b-writter-testsv2 | VendyGo | 2025-05-05T13:59:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T13:59:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pribadihcr/sdxl_Tray_50um_5 | pribadihcr | 2025-05-05T13:59:08Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-05-05T12:53:07Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks Tray_50um
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - pribadihcr/sdxl_Tray_50um_5
<Gallery />
## Model description
These are pribadihcr/sdxl_Tray_50um_5 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks Tray_50um to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](pribadihcr/sdxl_Tray_50um_5/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
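The TODO above was never filled in; here is a minimal, unofficial sketch assembled from the card's stated base model, training VAE, and trigger phrase:
```python
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
import torch

# The card notes training used the madebyollin/sdxl-vae-fp16-fix VAE.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
# Pass weight_name=... if the repo holds more than one safetensors file.
pipe.load_lora_weights("pribadihcr/sdxl_Tray_50um_5")
image = pipe("a photo of sks Tray_50um").images[0]
```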
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jspaulsen/orpheus-vctk-ft | jspaulsen | 2025-05-05T13:57:27Z | 206 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T03:14:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memeviss/zombieXIX_6 | memeviss | 2025-05-05T13:54:22Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-05T11:56:41Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
mlfoundations-dev/d1_code_all_large_0.3k | mlfoundations-dev | 2025-05-05T13:54:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:13:54Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_code_all_large_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_code_all_large_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_code_all_large_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
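The trainer template stops at framework versions; as an unofficial addition, a minimal inference sketch for this full fine-tune of Qwen2.5-7B-Instruct (the same pattern applies to the sibling d1_code_* checkpoints below, with only the repo id changed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mlfoundations-dev/d1_code_all_large_0.3k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative prompt only; the model is chat-tuned, so use the chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```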
|
mlfoundations-dev/d1_code_mc_llm_0.3k | mlfoundations-dev | 2025-05-05T13:53:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:13:13Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_code_mc_llm_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_code_mc_llm_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_code_mc_llm_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mlfoundations-dev/d1_code_all_0.3k | mlfoundations-dev | 2025-05-05T13:52:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:14:11Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_code_all_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_code_all_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_code_all_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mlfoundations-dev/d1_code_longest_0.3k | mlfoundations-dev | 2025-05-05T13:51:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:14:02Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_code_longest_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_code_longest_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_code_longest_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
EB1986/model-2 | EB1986 | 2025-05-05T13:51:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T13:47:02Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EB1986
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sovitrath/receipt-ocr-full-ft | sovitrath | 2025-05-05T13:50:58Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"idefics3",
"ocr",
"vlm",
"base_model:HuggingFaceTB/SmolVLM-256M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM-256M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T12:38:34Z | ---
license: apache-2.0
metrics:
- cer
base_model:
- HuggingFaceTB/SmolVLM-256M-Instruct
tags:
- ocr
- vlm
---
Check the GitHub project here => https://github.com/sovit-123/receipt_ocr
Usage:
```python
from transformers import AutoModelForVision2Seq, AutoProcessor
from PIL import Image
import torch
model = AutoModelForVision2Seq.from_pretrained(
    'sovitrath/receipt-ocr-full-ft',
    device_map='auto',
    torch_dtype=torch.bfloat16,
    # Use `flash_attention_2` on Ampere GPUs and above, `eager` on older GPUs.
    _attn_implementation='flash_attention_2',
)
processor = AutoProcessor.from_pretrained('sovitrath/receipt-ocr-full-ft')

test_image = Image.open('inference_data/image_1.jpeg').convert('RGB')

def test(model, processor, image, max_new_tokens=1024, device='cuda'):
    messages = [
        {
            'role': 'user',
            'content': [
                {'type': 'image'},
                {'type': 'text', 'text': 'OCR this image accurately'}
            ]
        },
    ]

    # Prepare the text input by applying the chat template.
    text_input = processor.apply_chat_template(
        messages,
        add_generation_prompt=True
    )

    # The processor expects a list of image lists (one list per prompt).
    if image.mode != 'RGB':
        image = image.convert('RGB')
    image_inputs = [[image]]

    # Prepare the inputs for the model and move them to the target device.
    model_inputs = processor(
        text=text_input,
        images=image_inputs,
        return_tensors='pt',
    ).to(device)

    # Generate text with the model.
    generated_ids = model.generate(**model_inputs, max_new_tokens=max_new_tokens)

    # Trim the generated ids to remove the prompt tokens.
    trimmed_generated_ids = [
        out_ids[len(in_ids):] for in_ids, out_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    # Decode the output text.
    output_text = processor.batch_decode(
        trimmed_generated_ids,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=False
    )
    return output_text[0]  # Return the first decoded output text

output = test(model, processor, test_image)
print(output)
```
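Since the card lists character error rate (CER) as its metric, a transcription can be scored against a ground-truth string with the `evaluate` library — a minimal sketch, assuming `evaluate` and `jiwer` are installed and reusing `output` from above (the reference text is illustrative):
```python
import evaluate

# CER = character-level edit distance / reference length (lower is better).
cer = evaluate.load('cer')
score = cer.compute(
    predictions=[output],
    references=['TOTAL 12.50 THANK YOU'],  # hypothetical ground-truth transcription
)
print(f'CER: {score:.4f}')
```
 |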
YatinSatija/eisenhow-matrix-pt-model | YatinSatija | 2025-05-05T13:50:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-05T13:50:07Z | # Task Prioritization Model
This document describes the implementation of our AI-powered task prioritization model that uses the Eisenhower Matrix approach to categorize and prioritize tasks.
## Model Architecture
The model is built using PyTorch and leverages the DistilBERT transformer architecture for natural language understanding. It consists of several key components:
### Base Model
- Uses DistilBERT (distilbert-base-uncased) as the foundation
- Processes task descriptions and context through transformer layers
- Outputs contextual embeddings of dimension 768
### Classification Heads
1. **Quadrant Classifier**
- Maps tasks to one of four Eisenhower Matrix quadrants:
- Quadrant 1 (0): "Do First" - Urgent & Important
- Quadrant 2 (1): "Schedule" - Important but Not Urgent
- Quadrant 3 (2): "Delegate" - Urgent but Not Important
- Quadrant 4 (3): "Don't Do" - Neither Urgent nor Important
2. **Urgency Head**
- Predicts task urgency score
- Single output neuron with sigmoid activation
- Range: 0 (not urgent) to 1 (very urgent)
3. **Importance Head**
- Predicts task importance score
- Single output neuron with sigmoid activation
- Range: 0 (not important) to 1 (very important)
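A minimal sketch of how these three heads can sit on top of the shared DistilBERT encoder (the class and attribute names here are assumptions; the trained checkpoint's own module layout may differ):
```python
import torch
import torch.nn as nn
from transformers import DistilBertModel

class TaskPrioritizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained('distilbert-base-uncased')
        hidden = self.encoder.config.dim               # 768
        self.quadrant_head = nn.Linear(hidden, 4)      # four Eisenhower quadrants
        self.urgency_head = nn.Linear(hidden, 1)       # sigmoid -> [0, 1]
        self.importance_head = nn.Linear(hidden, 1)    # sigmoid -> [0, 1]

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]              # [CLS] embedding
        return {
            'quadrant_logits': self.quadrant_head(cls),
            'urgency': torch.sigmoid(self.urgency_head(cls)),
            'importance': torch.sigmoid(self.importance_head(cls)),
        }
```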
## Model Input
The model accepts task descriptions as input, which are processed as follows:
- Maximum sequence length: 512 tokens
- Tokenization using DistilBERT tokenizer
- Input format: [CLS] + task description + [SEP]
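In code, this preprocessing corresponds to a standard tokenizer call — a short sketch (the special tokens are inserted automatically):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
encoded = tokenizer(
    'Prepare quarterly financial report due tomorrow',
    max_length=512,
    truncation=True,
    return_tensors='pt',
)  # input_ids begin with [CLS] and end with [SEP]
```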
## Model Output
The model provides three types of predictions:
1. **Quadrant Classification**
- Probability distribution over four quadrants
- Final prediction: highest probability quadrant
- Includes confidence score
2. **Urgency Score**
- Continuous value between 0 and 1
- Higher values indicate greater urgency
3. **Importance Score**
- Continuous value between 0 and 1
- Higher values indicate greater importance
## Usage Example
```python
from task_model import test_model
# Example task
task_description = "Prepare quarterly financial report due tomorrow"
# Get predictions
predictions = test_model("best_model.pt", task_description)
# Output format
{
'quadrant': "Do First", # Predicted quadrant
'urgency': 0.85, # Urgency score
'importance': 0.92, # Importance score
'confidence': 0.88 # Classification confidence
}
```
## Technical Requirements
- PyTorch 2.2.0
- Transformers 4.37.2
- CUDA-capable GPU (optional, for faster inference)
## Model Training
The model was trained on a dataset of labeled tasks with:
- Quadrant labels (0-3)
- Urgency scores (0-1)
- Importance scores (0-1)
Training process included:
- Fine-tuning of DistilBERT
- Multi-task learning for quadrant classification, urgency, and importance
- Validation on held-out test set
## Performance
The model achieves:
- High accuracy in quadrant classification
- Reliable urgency and importance scoring
- Fast inference time suitable for real-time task prioritization
## Future Improvements
Planned enhancements:
- Larger training dataset
- Multi-language support
- Context-aware prioritization
- Integration with calendar events
- Personalized prioritization based on user history |
mlfoundations-dev/d1_math_multiple_languages_0.3k | mlfoundations-dev | 2025-05-05T13:48:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:14:16Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_multiple_languages_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_multiple_languages_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_multiple_languages_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
quickstep3621/dippy-v3-1-9 | quickstep3621 | 2025-05-05T13:47:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T13:47:32Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])  # last turn holds the model's reply
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
Beyturx/Beytur1 | Beyturx | 2025-05-05T13:44:44Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-05-05T13:38:23Z | ---
license: mit
---
Remove Instagram Non-Followers - Demo Web Application
This small web application analyzes the Instagram usernames you enter manually and lists the users who do not follow you back.
## Features
- You enter the accounts you follow and the accounts that follow you by hand.
- The system lists the users who do not follow you back.
- It does not connect to your real Instagram account, so it is safe to use.
- Written in HTML, CSS, and JavaScript.
- Runs in the browser; no server required.
## Usage
1. Open the `index.html` file in a browser.
2. Enter the users you follow, separated by commas.
3. Enter the users who follow you, separated by commas.
4. Press the **Check** ("Kontrol Et") button.
5. The list of users who do not follow you back appears below.
## Warning
This is only a simulation; the real Instagram API is not used.
It is not suitable for automated follow/unfollow operations.
## File Structure
```
instagram_cleaner_demo/
│
├── index.html # Main HTML page
├── style.css # Page styles
├── script.js # JavaScript control logic
└── README.md # This file
```
## License
This project was prepared for educational purposes. Its use for any commercial purpose is not recommended. |
mlfoundations-dev/e1_science_longest_phi | mlfoundations-dev | 2025-05-05T12:27:37Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T02:19:35Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: e1_science_longest_phi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e1_science_longest_phi
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_science_longest_phi dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
erdem-erdem/llama3.2-3b-it-coutdown-game-7k-qwq-r64-v0.2 | erdem-erdem | 2025-05-05T12:26:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T12:23:53Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** erdem-erdem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tcepi/ner_produtos_catmat | tcepi | 2025-05-05T12:26:06Z | 0 | 0 | null | [
"safetensors",
"bert",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"region:us"
] | null | 2025-05-05T11:50:23Z | ---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_produtos_catmat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_produtos_catmat
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2256
- Precision: 0.8528
- Recall: 0.8805
- F1: 0.8664
- Accuracy: 0.9280
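For quick experimentation, the checkpoint can be loaded with the token-classification pipeline — a minimal sketch (the example sentence is illustrative, and the entity labels returned come from the model's config):
```python
from transformers import pipeline

ner = pipeline(
    'token-classification',
    model='tcepi/ner_produtos_catmat',
    aggregation_strategy='simple',  # merge subword pieces into entity spans
)
print(ner('Caneta esferográfica azul, caixa com 50 unidades'))
```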
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4023 | 1.0 | 1191 | 0.2583 | 0.8435 | 0.8704 | 0.8567 | 0.9207 |
| 0.2189 | 2.0 | 2382 | 0.2256 | 0.8528 | 0.8805 | 0.8664 | 0.9280 |
| 0.1757 | 3.0 | 3573 | 0.2435 | 0.8436 | 0.9003 | 0.8711 | 0.9230 |
| 0.1464 | 4.0 | 4764 | 0.2555 | 0.8646 | 0.8897 | 0.8770 | 0.9323 |
| 0.1199 | 5.0 | 5955 | 0.2744 | 0.8690 | 0.8845 | 0.8767 | 0.9307 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.5.0+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
|
HYUKJUNCHOI/0505_llam_7ep_1e-4_10attn | HYUKJUNCHOI | 2025-05-05T12:23:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:unsloth/Llama-3.2-11B-Vision-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T12:23:03Z | ---
base_model: unsloth/Llama-3.2-11B-Vision-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HYUKJUNCHOI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-11B-Vision-Instruct
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|